Reduce Wasm code bloat of emitting ink! events
Currently, emitting ink! events is very costly with respect to Wasm file size. The bloat primarily stems from the fact that upon emitting an event we first create an event instance that we then immediately encode. This creation is very costly under Wasm if the event instance has a large memory footprint. In the ERC-20 example contract we have several events with a structure like:
#[ink(event)]
pub struct Transferred {
    from: Option<AccountId>,
    to: Option<AccountId>,
    amount: Balance,
}
Here AccountId is 256 bits and Balance is 128 bits wide. This means that even in packed form the Transferred struct has a memory footprint of at least 2 × (256 + 1) + 128 = 642 bits. Creating instances of this type in Wasm is extremely costly, since Wasm has to use the shadow stack for big types like these. What we should do instead is to use references where possible. However, the following definition:
#[ink(event)]
pub struct Transferred<'a> {
    from: Option<&'a AccountId>,
    to: Option<&'a AccountId>,
    amount: &'a Balance,
}
is currently not allowed as an event definition, since we require ink! events to implement scale::Encode and scale::Decode, and the latter requires Self: 'static, which does not hold for an event definition that contains references.
Maybe we could drop the latter trait bound, since we do not really require events to be decodable; that is just a nice-to-have feature.
Note though that even the reference-based event definition has a memory footprint of at least 98 bits (three 32-bit references plus two Option discriminants), which still requires Wasm to use the shadow stack.
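For instance, if the Decode requirement were dropped, the borrowed event would only need to derive scale::Encode, which does not impose the 'static bound. A minimal sketch of what such a definition could look like, using stand-in AccountId and Balance aliases rather than the real ink_env types:

// `scale` refers to parity-scale-codec, as it is aliased in ink! contract crates.
// Stand-in types for illustration only.
type AccountId = [u8; 32];
type Balance = u128;

// Deriving only `Encode` compiles for a borrowed event, because the
// `Self: 'static` requirement comes from `Decode`, not from `Encode`.
#[derive(scale::Encode)]
pub struct Transferred<'a> {
    from: Option<&'a AccountId>,
    to: Option<&'a AccountId>,
    amount: &'a Balance,
}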
OR we may be able to emit events without constructing an event instance in the first place, by generating intelligent constructors that accept EncodeLike<T> for each event field instead of T, so that it is possible to feed references instead.
This potential solution would probably yield the best results, since we would not be required to create short-lived instances of potentially big types.
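A rough, hypothetical sketch of that idea (emit_transferred, the stand-in AccountId/Balance aliases and the plain Vec<u8> buffer are made up for illustration and are not ink!'s actual codegen or host API): a generated constructor that bounds its parameters by EncodeLike can encode the caller's references field by field, so no owned event value ever has to be materialized on the shadow stack.

use scale::{Encode, EncodeLike}; // `scale` = parity-scale-codec

// Stand-in types for illustration only.
type AccountId = [u8; 32];
type Balance = u128;

// Hypothetical generated constructor: each parameter only has to encode like
// the corresponding event field, so callers can pass references directly.
fn emit_transferred(
    from: Option<impl EncodeLike<AccountId>>,
    to: Option<impl EncodeLike<AccountId>>,
    amount: impl EncodeLike<Balance>,
) {
    let mut data: Vec<u8> = Vec::new();
    // `Option<&AccountId>` encodes to the same bytes as `Option<AccountId>`,
    // because `&T` forwards its `Encode` impl to `T`.
    from.encode_to(&mut data);
    to.encode_to(&mut data);
    amount.encode_to(&mut data);
    // `data` would then be handed to the environment's deposit-event call.
}

fn main() {
    let caller: AccountId = [1; 32];
    let initial_supply: Balance = 100;
    // Only references cross the call boundary; nothing event-sized is built here.
    emit_transferred(None::<&AccountId>, Some(&caller), &initial_supply);
}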
As an experiment I crudely removed the topics codegen, which reduced erc20 (based on https://github.com/paritytech/ink/pull/946) from 12.0KB to 9.9KB. As a comparison, with all calls to emit_event commented out, we are at 9.1KB.
Maybe we should take into account https://github.com/paritytech/ink/issues/809 during the implementation of this issue =)
A promising experiment: I managed to get 8.0K (down from 11.0K) from the latest master by making everything a reference:
/// Event emitted when a token transfer occurs.
#[ink(event)]
pub struct Transfer<'a> {
    #[ink(topic)]
    from: Option<&'a AccountId>,
    #[ink(topic)]
    to: Option<&'a AccountId>,
    value: &'a Balance,
}

Self::env().emit_event(&Transfer {
    from: None,
    to: Some(&caller),
    value: &initial_supply,
});
Did this just work out of the box or were codegen and ink! IR adjustments needed?
No, I needed to remove the Decode requirements and add a bunch of lifetimes everywhere. I'm sure they could be reduced, but I just wanted to prove the possible gains.
Regarding the claim above of getting to 8.0K (down from 11.0K) by making everything a reference: this is unfortunately wrong. The correct number is 10.3K, down from 10.6K (current master). So in fact the gains from that approach are marginal so far.
There is, however, some potential for optimizing the footprint of the topics codegen, which requires further investigation. For example, each new type of topic adds ~0.5K to the code size - partly because of the extra Encode impl, but possibly also because of the monomorphization of the Encode impl in push_topics.
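To illustrate the monomorphization point, a sketch only (push_topic_generic and push_topic_encoded are made-up names, not ink! internals): a helper that is generic over Encode is duplicated in the Wasm blob once per distinct topic type, whereas a non-generic helper taking already-encoded bytes is compiled exactly once.

use scale::Encode; // `scale` = parity-scale-codec

// One copy of this function ends up in the Wasm blob for every distinct `T`
// it is called with (e.g. Option<AccountId>, Balance, ...), so every new
// topic type adds its own monomorphized instance.
fn push_topic_generic<T: Encode>(topics: &mut Vec<Vec<u8>>, value: &T) {
    topics.push(value.encode());
}

// Only one copy of this function exists; the small `encode()` call happens at
// the call site instead, which keeps the shared helper monomorphic.
fn push_topic_encoded(topics: &mut Vec<Vec<u8>>, encoded: Vec<u8>) {
    topics.push(encoded);
}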