Using Events
An event is a mechanism for emitting notifications about specific actions or state changes that occur within a blockchain runtime. Events are typically used to inform the outside world about occurrences such as token transfers, account creations, or other significant operations within the blockchain.
#[pallet::event]
#[pallet::generate_deposit(pub(super) fn deposit_event)]
pub enum Event<T: Config> {
    /// A user has successfully set a new value.
    SomethingStored {
        /// The new value set.
        something: u32,
        /// The account who set the new value.
        who: T::AccountId,
    },
}
#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::do_something())]
pub fn do_something(origin: OriginFor<T>, something: u32) -> DispatchResult {
    // Check that the extrinsic was signed and get the signer.
    let who = ensure_signed(origin)?;
    // Update storage.
    Something::<T>::put(something);
    // Emit an event.
    Self::deposit_event(Event::SomethingStored { something, who });
    // Return a successful `DispatchResult`.
    Ok(())
}
Declaring a StorageMap
We declare a single storage map with the following syntax:
#[pallet::storage]
#[pallet::getter(fn simple_map)]
pub(super) type SimpleMap<T: Config> =
    StorageMap<_, Blake2_128Concat, T::AccountId, u32, ValueQuery>;
Explanation of the code:

- SimpleMap - the name of the storage map.
- #[pallet::getter(fn simple_map)] - a getter function simple_map is generated by the pallet getter macro.
- Blake2_128Concat - the hasher used for the map's keys. More on this below.

The map stores a key and its value:

- T::AccountId - the data type of the map's key.
- u32 - the data type of the map's value.
- ValueQuery - if you omit ValueQuery, interacting with the map returns an Option<u32>, which means that when you try to get a value from your StorageMap you will get either Some(value) or None. Using ValueQuery always returns a value, so you don't have to deal with unwrapping the get calls.
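For comparison, here is a minimal sketch of the same map declared without ValueQuery, together with the two read patterns (the SimpleMapOptional name is an illustrative assumption, not part of the pallet above):

#[pallet::storage]
pub(super) type SimpleMapOptional<T: Config> =
    StorageMap<_, Blake2_128Concat, T::AccountId, u32>;

// Inside a dispatchable, the two query kinds read differently:
// let maybe_value: Option<u32> = SimpleMapOptional::<T>::get(&who); // OptionQuery (the default)
// let value: u32 = SimpleMap::<T>::get(&who);                       // ValueQuery: missing key => 0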
Choosing a Hasher
The hasher determines how keys are hashed before they are written to storage. Although the syntax above is complex, most of it should be straightforward if you've understood the recipe on storage values. The last unfamiliar piece of writing a storage map is choosing which hasher to use. In general you should choose one of the three following hashers. The choice of hasher will affect the performance and security of your chain. If you don't want to think much about this, just choose Blake2_128Concat and skip to the next section.
Blake2_128Concat
This is a cryptographically secure hash function, and is always safe to use. It is reasonably efficient, and will keep your storage tree balanced. You must choose this hasher if users of your chain have the ability to affect the storage keys. In this pallet, the keys are AccountIds. At first it may seem that the user doesn't affect the AccountId, but in reality a malicious user can generate thousands of accounts and use the one that will affect the chain's storage tree in the way the attacker likes. For this reason, we have chosen to use the Blake2_128Concat hasher.
Twox64Concat
This hasher is not cryptographically secure, but is more efficient than blake2. Thus it represents trading security for performance. You should not use this hasher if chain users can affect the storage keys. However, it is perfectly safe to use this hasher to gain performance in scenarios where the users do not control the keys. For example, if the keys in your map are sequentially increasing indices and users cannot cause the indices to rapidly increase, then this is a perfectly reasonable choice.
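As a sketch of that last point (the ProposalOwners name and the u32 index key are assumptions for illustration, not part of the pallet above), a map keyed by a runtime-assigned, sequentially increasing index could reasonably use Twox64Concat:

#[pallet::storage]
pub(super) type ProposalOwners<T: Config> =
    StorageMap<_, Twox64Concat, u32, T::AccountId, OptionQuery>;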
Identity
The Identity "hasher" is really not a hasher at all, but merely an identity function that returns the same value it receives. This hasher is only an option when the key type in your storage map is already a hash and is not controllable by the user. If you're in doubt whether the user can influence the key, just use blake2.
The Storage Map API
This pallet demonstrates some of the most common methods available in a storage map, including insert, get, take, and contains_key; see the StorageMap documentation for the full API.
// Insert
<SimpleMap<T>>::insert(&user, entry);
// Get
let entry = <SimpleMap<T>>::get(account);
// Take
let entry = <SimpleMap<T>>::take(&user);
// Contains Key
<SimpleMap<T>>::contains_key(&user)
// Mutate
<SimpleMap<T>>::mutate(&user, |value| {
    // With `ValueQuery`, the closure receives `&mut u32` directly.
    *value = entry;
});
insert and mutate
When deciding between mutate and insert to update storage, consider the following:
Insert performs a simple write operation to the database, which is the more efficient option. On the other hand, mutate involves a read operation followed by a write, making it a more expensive database operation. Therefore, when you have the option to use insert (i.e., you don't need to read the existing value), it's recommended to use insert over mutate.
Insert is suitable for inserting or overwriting an existing value. If you simply want to store a specific value, insert is the way to go. Mutate, however, is designed for scenarios where you need to modify the existing value or make decisions based on its current state. Use mutate when you need to perform conditional updates or modifications that depend on the current value.
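A minimal sketch of the two patterns, reusing the SimpleMap declared above (who is assumed to be an AccountId obtained from ensure_signed):

// `insert`: a single write, no read - use it when the new value does not depend on the old one.
<SimpleMap<T>>::insert(&who, 42u32);

// `mutate`: a read followed by a write - use it when the update depends on the current value.
<SimpleMap<T>>::mutate(&who, |value| {
    // With `ValueQuery`, the closure receives `&mut u32` directly.
    *value = value.saturating_add(1);
});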
Dev Mode
Dev mode allows you to write code without assigning weights to functions. Weights are an essential mechanism for measuring and limiting usage, establishing an economic incentive structure, preventing network overload, and mitigating DoS vulnerabilities. Weights are calculated during benchmarking.
If you want to write functions without doing benchmarking, you can use dev mode. You can write the benchmark later on, once you've completed the prototyping and testing.
To convert your pallet to dev mode, annotate the pallet module with #[frame_support::pallet(dev_mode)].
Use:
#[frame_support::pallet(dev_mode)]
pub mod pallet {
instead of
#[frame_support::pallet]
pub mod pallet {
In dev mode you can write functions without assigning a meaningful weight, using #[pallet::weight(0)]:
#[pallet::call]
impl<T: Config> Pallet<T> {
    #[pallet::call_index(0)]
    #[pallet::weight(0)]
    pub fn do_something(origin: OriginFor<T>, something: u32) -> DispatchResult {
        let who = ensure_signed(origin)?;
        Something::<T>::put(something);
        Self::deposit_event(Event::SomethingStored { something, who });
        Ok(())
    }
}
Cache Multiple Calls
Calls to runtime storage have an associated cost and developers should strive to minimize the number of calls.
#[pallet::storage]
#[pallet::getter(fn some_copy_value)]
pub(super) type SomeCopyValue<T: Config> = StorageValue<_, u32>;
#[pallet::storage]
#[pallet::getter(fn king_member)]
pub(super) type KingMember<T: Config> = StorageValue<_, T::AccountId>;
#[pallet::storage]
#[pallet::getter(fn group_members)]
pub(super) type GroupMembers<T: Config> = StorageValue<_, Vec<T::AccountId>>;
Copy Types
For Copy types, it is easy to reuse previous storage calls by simply reusing the value, which is automatically copied on reuse. In the code below, the second call is unnecessary:
pub fn increase_value_no_cache(
    origin: OriginFor<T>,
    some_val: u32,
) -> DispatchResultWithPostInfo {
    let _ = ensure_signed(origin)?;
    let original_call = <SomeCopyValue<T>>::get();
    let some_calculation = original_call
        .unwrap()
        .checked_add(some_val)
        .ok_or("addition overflowed1")?;
    // this next storage call is unnecessary and is wasteful
    let unnecessary_call = <SomeCopyValue<T>>::get();
    // should've just used `original_call` here because u32 is copy
    let another_calculation = some_calculation
        .checked_add(unnecessary_call.unwrap())
        .ok_or("addition overflowed2")?;
    <SomeCopyValue<T>>::put(another_calculation);
    let now = <frame_system::Pallet<T>>::block_number();
    Self::deposit_event(Event::InefficientValueChange(another_calculation, now));
    Ok(().into())
}
Instead, the initial call value should be reused. In this example, the SomeCopyValue value is Copy, so we should prefer the following code without the unnecessary second call to storage:
pub fn increase_value_w_copy(
    origin: OriginFor<T>,
    some_val: u32,
) -> DispatchResultWithPostInfo {
    let _ = ensure_signed(origin)?;
    let original_call = <SomeCopyValue<T>>::get();
    let some_calculation = original_call
        .unwrap()
        .checked_add(some_val)
        .ok_or("addition overflowed1")?;
    // uses the original_call because u32 is copy
    let another_calculation = some_calculation
        .checked_add(original_call.unwrap())
        .ok_or("addition overflowed2")?;
    <SomeCopyValue<T>>::put(another_calculation);
    let now = <frame_system::Pallet<T>>::block_number();
    Self::deposit_event(Event::BetterValueChange(another_calculation, now));
    Ok(().into())
}
Clone Types
If the type was not Copy, but was Clone, then it is still better to clone the value in the method than to make another call to runtime storage.
The runtime methods enable the calling account to swap the T::AccountId value in storage if:

- the existing storage value is not in GroupMembers, AND
- the calling account is in GroupMembers

The first implementation makes a second, unnecessary call to runtime storage instead of cloning the value already fetched into existing_king:
pub fn swap_king_no_cache(origin: OriginFor<T>) -> DispatchResultWithPostInfo {
    let new_king = ensure_signed(origin)?;
    let existing_king = <KingMember<T>>::get();
    // only places a new account if
    // (1) the existing account is not a member &&
    // (2) the new account is a member
    ensure!(
        !Self::is_member(&existing_king.unwrap()),
        "current king is a member so maintains priority"
    );
    ensure!(
        Self::is_member(&new_king),
        "new king is not a member so doesn't get priority"
    );
    // BAD (unnecessary) storage call
    let old_king = <KingMember<T>>::get();
    // place new king
    <KingMember<T>>::put(new_king.clone());
    Self::deposit_event(Event::InefficientKingSwap(old_king.unwrap(), new_king));
    Ok(().into())
}
Note that existing_king cannot simply be reused in the event emission, because the unwrap inside the first ensure! consumes it; a cached copy is needed rather than a second storage call. The same ownership rule applies to new_king: if it were moved into put without a clone and then reused in the event emission, the compiler would return the following error:
error[E0382]: use of moved value: `new_king`
--> pallets/storage-cache/src/lib.rs:190:79
|
168 | let new_king = ensure_signed(origin)?;
| -------- move occurs because `new_king` has type `<T as frame_system::Config>::AccountId`, which does not implement the `Copy` trait
...
188 | <KingMember<T>>::put(new_king);
| -------- value moved here
189 |
190 | Self::deposit_event(Event::InefficientKingSwap(old_king.unwrap(), new_king));
| ^^^^^^^^ value used here after move
|
help: consider cloning the value if the performance cost is acceptable
|
188 | <KingMember<T>>::put(new_king.clone());
| ++++++++
For more information about this error, try `rustc --explain E0382`.
error: could not compile `pallet-storage-cache` (lib) due to 1 previous error
Fixing this only requires cloning the original value before it is moved:
pub fn swap_king_with_cache(origin: OriginFor<T>) -> DispatchResultWithPostInfo {
    let new_king = ensure_signed(origin)?;
    let existing_king = <KingMember<T>>::get();
    // prefer to clone previous call rather than repeat call unnecessarily
    let old_king = existing_king.clone();
    // only places a new account if
    // (1) the existing account is not a member &&
    // (2) the new account is a member
    ensure!(
        !Self::is_member(&existing_king.unwrap()),
        "current king is a member so maintains priority"
    );
    ensure!(
        Self::is_member(&new_king),
        "new king is not a member so doesn't get priority"
    );
    // <no (unnecessary) storage call here>
    // place new king
    <KingMember<T>>::put(new_king.clone());
    Self::deposit_event(Event::BetterKingSwap(old_king.unwrap(), new_king));
    Ok(().into())
}
Not all types implement Copy or Clone, so it is important to discern other patterns that minimize and alleviate the cost of calls to storage.
Using Vectors as Sets
A Set is an unordered data structure that stores entries without duplicates. Substrate's storage API does not provide a way to declare sets explicitly, but they can be implemented using either vectors or maps.
This recipe demonstrates how to implement a storage set on top of a vector, and explores the performance of the implementation. When implementing a set in your own runtime, you should compare this technique to implementing a map-set.
In this pallet we implement a set of AccountIds. We do not use the set for anything in this pallet; we simply maintain the set. Using the set is demonstrated in the recipe on pallet coupling. We provide dispatchable calls to add and remove members, ensuring that the number of members never exceeds a hard-coded maximum.
/// A maximum number of members. When membership reaches this number, no new members may join.
pub const MAX_MEMBERS: usize = 16;
Storage Item
We will store the members of our set in a Rust Vec. A Vec is a collection of elements that is ordered and may contain duplicates. Because the Vec provides more functionality than our set needs, we are able to build a set from the Vec. We declare our single storage item as follows:
#[pallet::storage]
#[pallet::getter(fn members)]
pub(super) type Members<T: Config> = StorageValue<_, Vec<T::AccountId>, ValueQuery>;
In order to use the Vec successfully as a set, we will need to manually ensure that no duplicate entries are added. To ensure reasonable performance, we will enforce that the Vec always remains sorted. This allows for quickly determining whether an item is present using a binary search.
Adding Members
Any user may join the membership set by calling the add_member dispatchable, provided they are not already a member and the membership limit has not been reached. We check for these two conditions first, and then insert the new member only after we are sure it is safe to do so. This is an example of the mnemonic idiom, "verify first, write last".
pub fn add_member(origin: OriginFor<T>) -> DispatchResult {
    let new_member = ensure_signed(origin)?;
    let mut members = Members::<T>::get();
    ensure!(members.len() < MAX_MEMBERS, Error::<T>::MembershipLimitReached);
    // We don't want to add duplicate members, so we check whether the potential new
    // member is already present in the list. Because the list is always ordered, we can
    // leverage the binary search which makes this check O(log n).
    match members.binary_search(&new_member) {
        // If the search succeeds, the caller is already a member, so just return
        Ok(_) => Err(Error::<T>::AlreadyMember.into()),
        // If the search fails, the caller is not a member and we learned the index where
        // they should be inserted
        Err(index) => {
            members.insert(index, new_member.clone());
            Members::<T>::put(members);
            Self::deposit_event(Event::MemberAdded(new_member));
            Ok(())
        }
    }
}
If it turns out that the caller is not already a member, the binary search will fail. In this case it still returns the index into the Vec at which the member would have been stored had they been present. We then use this information to insert the member at the appropriate location, thus maintaining a sorted Vec.
Removing a Member
Removing a member is straightforward. We begin by looking for the caller in the list. If not present, there is no work to be done. If the caller is present, the search algorithm returns her index, and she can be removed.
fn remove_member(origin: OriginFor<T>) -> DispatchResult {
    let old_member = ensure_signed(origin)?;
    let mut members = Members::<T>::get();
    // We have to find out where, in the sorted vec, the member is, if anywhere.
    match members.binary_search(&old_member) {
        // If the search succeeds, the caller is a member, so remove her
        Ok(index) => {
            members.remove(index);
            Members::<T>::put(members);
            Self::deposit_event(Event::MemberRemoved(old_member));
            Ok(())
        },
        // If the search fails, the caller is not a member, so just return
        Err(_) => Err(Error::<T>::NotMember.into()),
    }
}
Performance
Now that we have built our set, let's analyze its performance in some common operations.
Membership Check
In order to check for the presence of an item in a vec-set, we make a single storage read, decode the entire vector, and perform a binary search.
DB Reads: O(1) Decoding: O(n) Search: O(log n)
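A membership-check helper along these lines (a sketch; the is_member name mirrors the helper referenced in the storage-cache recipe, though its exact placement is an assumption) could be:

impl<T: Config> Pallet<T> {
    /// Returns true if `who` is in the sorted members list.
    /// One storage read, a full decode of the Vec, then an O(log n) binary search.
    pub fn is_member(who: &T::AccountId) -> bool {
        Members::<T>::get().binary_search(who).is_ok()
    }
}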
Updating
Updates to the set, such as adding and removing members as we demonstrated, require first performing a membership check. They also require re-encoding the entire Vec and storing it back in the database. Finally, an update still costs the normal amortized constant time associated with mutating a Vec.
DB Writes: O(1) Encoding: O(n)
Iteration
Iterating over all items in a vec-set is achieved by using the Vec's own iter method. The entire set can be read from storage in one go, and each item must be decoded. Finally, the actual processing you do on the items will take some time.
DB Reads: O(1) Decoding: O(n) Processing: O(n)
Because accessing the database is a relatively slow operation, reading the entire list in a single read is a big win. If you need to iterate over the data frequently, you may want a vec-set.
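A brief sketch of such an iteration, under the declarations above:

// One storage read returns the whole sorted Vec; everything after that happens in memory.
let members = Members::<T>::get();
let member_count = members.iter().count(); // O(n) processing, no further DB reads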
A Note on Weights
It is always important that the weights associated with your dispatchables represent the actual time it takes to execute them. In this pallet, we have provided an upper bound on the size of the set, which places an upper bound on the computation - this means we can use constant weight annotations. Your set operations should either have a maximum size or a custom weight function that captures the computation appropriately.
Using Maps as Sets
A Set is an unordered data structure that stores entries without duplicates. Substrate's storage API does not provide a way to declare sets explicitly, but they can be implemented using either vectors or maps.
This recipe shows how to implement a storage set on top of a map, and explores the performance of the implementation. When implementing a set in your own runtime, you should compare this technique to implementing a vec-set.
In this pallet we implement a set of AccountIds. We do not use the set for anything in this pallet; we simply maintain its membership. Using the set is demonstrated in the recipe on pallet coupling. We provide dispatchable calls to add and remove members, ensuring that the number of members never exceeds a hard-coded maximum.
/// A maximum number of members. When membership reaches this number, no new members may join.
pub const MAX_MEMBERS: u32 = 16;
Storage Item
We will store the members of our set as the keys in one of Substrate's StorageMaps. There is also a recipe specifically about using storage maps. The storage map itself does not track its size internally, so we introduce a second storage value for this purpose.
#[pallet::storage]
#[pallet::getter(fn members)]
pub(super) type Members<T: Config> =
    StorageMap<_, Blake2_128Concat, T::AccountId, (), ValueQuery>;
#[pallet::storage]
pub(super) type MemberCount<T> = StorageValue<_, u32, ValueQuery>;
The value stored in the map is () because we only care about the keys.
Adding Members
Any user may join the membership set by calling the add_member dispatchable, so long as they are not already a member and the membership limit has not been reached. We check for these two conditions first, and then insert the new member only after we are sure it is safe to do so.
fn add_member(origin: OriginFor<T>) -> DispatchResult {
    let new_member = ensure_signed(origin)?;
    let member_count = MemberCount::<T>::get();
    ensure!(member_count < MAX_MEMBERS, Error::<T>::MembershipLimitReached);
    // We don't want to add duplicate members, so we check whether the potential new
    // member is already present in the list. Because the membership is stored as a hash
    // map this check is constant time O(1)
    ensure!(!Members::<T>::contains_key(&new_member), Error::<T>::AlreadyMember);
    // Insert the new member and emit the event
    Members::<T>::insert(&new_member, ());
    MemberCount::<T>::put(member_count + 1); // overflow check not necessary because of maximum
    Self::deposit_event(Event::MemberAdded(new_member));
    Ok(())
}
When we successfully add a new member, we also manually update the size of the set.
Removing a Member
Removing a member is straightforward. We begin by looking for the caller in the list. If not present, there is no work to be done. If the caller is present, we simply remove them and update the size of the set.
fn remove_member(origin: OriginFor<T>) -> DispatchResult {
    let old_member = ensure_signed(origin)?;
    ensure!(Members::<T>::contains_key(&old_member), Error::<T>::NotMember);
    Members::<T>::remove(&old_member);
    MemberCount::<T>::mutate(|v| *v -= 1);
    Self::deposit_event(Event::MemberRemoved(old_member));
    Ok(())
}
Performance
Now that we have built our set, let's analyze its performance in some common operations.
Membership Check
In order to check for the presence of an item in a map set, we make a single storage read. If we only care about the presence or absence of the item, we don't even need to decode it. This constant time membership check is the greatest strength of a map set.
DB Reads: O(1)
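Mirroring the vec-set helper sketched earlier, a map-set membership check is just a contains_key call (again a sketch, with the is_member name assumed):

impl<T: Config> Pallet<T> {
    /// Constant-time membership check: one storage read, and the unit value never needs decoding.
    pub fn is_member(who: &T::AccountId) -> bool {
        Members::<T>::contains_key(who)
    }
}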
Updating
Updates to the set, such as adding and removing members as we demonstrated, require first performing a membership check. Additions also require encoding the new item.
DB Reads: O(1) Encoding: O(1) DB Writes: O(1)
If your set operations will require a lot of membership checks or mutation of individual items, you may want a map-set.
Iteration
Iterating over all items in a map-set is achieved by using the IterableStorageMap trait, which iterates (key, value) pairs (although in this case, we don't care about the values). Because each map entry is stored as an individual trie node, iterating a map set requires a database read for each item. Finally, the actual processing of the items will take some time.
DB Reads: O(n) Decoding: O(n) Processing: O(n)
Because accessing the database is a relatively slow operation, returning to the database for each item is quite expensive. If your set operations will require frequent iterating, you will probably prefer a vec-set.
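A sketch of iterating the map-set's keys; iter_keys (like iter) is available here because Blake2_128Concat is a reversible hasher:

// Each member lives in its own trie node, so this costs one database read per key.
let all_members: Vec<T::AccountId> = Members::<T>::iter_keys().collect();

// `Members::<T>::iter()` would yield `(T::AccountId, ())` pairs instead,
// if the (unit) values were ever needed.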
A Note on Weights
It is always important that the weights associated with your dispatchables represent the actual time it takes to execute them. In this pallet, we have provided an upper bound on the size of the set, which places an upper bound on the computation - this means we can use constant weight annotations. Your set operations should either have a maximum size or a custom weight function that captures the computation appropriately.