Update benchmarks for storage-related opcodes
Source: https://github.com/FuelLabs/fuel-core/pull/1230#pullrequestreview-1512624927
Based on the benchmarks from https://github.com/FuelLabs/fuel-core/pull/1230, it is clear that the SMT and a database with many entries slow down storage interaction significantly.
We need to update our benchmarks for the opcodes `SCWQ`, `SWW`, `SWWQ`, `TR`, `TRO`, `CALL`, `SMO`, `BURN`, and `MINT`.
These opcodes modify (insert or remove) state/balances and update the SMT. The gas cost should cover the worst-case scenario, when the contract has `100_000_000` entries in storage.
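A rough sketch of why the entry count matters (an illustration only, not the fuel-core cost model): each SMT update rehashes one root-to-leaf path, and with uniformly distributed keys that path is about `ceil(log2(n))` hashes long, so the cost grows with the number of entries, but only logarithmically.

```rust
/// Approximate SMT path length (hash operations per update) for `n`
/// uniformly distributed keys: roughly ceil(log2(n)).
fn approx_smt_path_len(n: u64) -> u32 {
    n.next_power_of_two().trailing_zeros()
}

fn main() {
    for n in [1_000_000u64, 10_000_000, 100_000_000] {
        // Prints ~20, ~24, and ~27 hashes per update respectively.
        println!("{n:>11} entries -> ~{} hashes per update", approx_smt_path_len(n));
    }
}
```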
Opcodes like `CALL`, `TR`, and `TRO` update the SMT only in the contract context, so we could optimize the gas charging. But that would make our gas cost model more complex, so it is okay for now to charge more than we need. We can always optimize it later.
I think only `TRO` (in the external context) could be optimized here, since `CALL` and `TR` must always interact with contract balances.
Yeah, but in some cases we interact with the storage only once, and in some cases twice =) We can optimize it to charge per interaction during execution instead of once at the beginning.
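A hypothetical shape of that per-interaction charging (names and costs are illustrative, not the actual fuel-vm gas model):

```rust
/// Illustrative gas meter: charge for each storage interaction as it
/// happens, instead of a flat worst-case charge when the opcode starts.
struct GasMeter {
    remaining: u64,
}

#[derive(Debug)]
struct OutOfGas;

impl GasMeter {
    fn charge(&mut self, amount: u64) -> Result<(), OutOfGas> {
        self.remaining = self.remaining.checked_sub(amount).ok_or(OutOfGas)?;
        Ok(())
    }

    /// Called by the executing opcode once per actual SMT write, so an
    /// opcode that touches storage once pays half of what an opcode
    /// touching it twice pays.
    fn charge_storage_writes(&mut self, writes: u64, cost_per_write: u64) -> Result<(), OutOfGas> {
        self.charge(writes.saturating_mul(cost_per_write))
    }
}
```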
> The gas cost should cover the worst-case scenario, when the contract has `100_000_000` entries in storage.
@xgreenx Any ideas how to get a db with `100_000_000` contract entries quickly? A quick benchmark implies it would take about six hours to fill the database, and that's assuming there's no slowdown as entries accumulate, which is unlikely. Currently I'm just doing something like this:
```rust
fn fill_contract_storage(db: &mut VmDatabase, contract: &ContractId) -> io::Result<()> {
    for k in 0u64..100_000_000 {
        // Build a unique 32-byte key from the counter (big-endian in the
        // first 8 bytes) and store it as both key and value.
        let mut key = Bytes32::zeroed();
        key.as_mut()[..8].copy_from_slice(&k.to_be_bytes());
        db.merkle_contract_state_insert(contract, &key, &key)?;
    }
    Ok(())
}
```
Yeah, it is definitely the slowest way of doing that. You can find a `Database::init_contract_state` function that should do it much faster. For balances you can use `Database::init_contract_balances`.
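A sketch of the bulk path, assuming `Database::init_contract_state` takes the contract id and an iterator of `(key, value)` slots (the exact signature may differ, so adjust to the actual API):

```rust
fn fill_contract_storage_bulk(
    db: &mut Database,
    contract: &ContractId,
    entries: u64,
) -> anyhow::Result<()> {
    // Same key layout as the slow loop above: counter big-endian in the
    // first 8 bytes, value = key.
    let slots = (0..entries).map(|k| {
        let mut key = Bytes32::zeroed();
        key.as_mut()[..8].copy_from_slice(&k.to_be_bytes());
        (key, key)
    });
    // One bulk pass builds the state (and its Merkle data) instead of
    // paying for a full SMT update per insert.
    db.init_contract_state(contract, slots)?;
    Ok(())
}
```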
If it still takes too much time to set up, you can decrease this value to `10M` or `1M`. We can multiply the final value by a constant.
But we need to set up this contract one time and reuse it in all benchmarks. To remove the impact of the RocksDB cache, we can use a random storage key from a pre-defined seed on each iteration.
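For example (a minimal sketch; `seeded_existing_keys` is a hypothetical helper matching the key layout from the fill loop above):

```rust
use rand::{rngs::StdRng, Rng, SeedableRng};

/// Draw keys that exist in the pre-filled contract, in a deterministic
/// order from a fixed seed: every benchmark run touches the same sequence,
/// but the accesses are spread across the key space so RocksDB's block
/// cache cannot keep serving one hot entry.
fn seeded_existing_keys(seed: u64, entries: u64) -> impl Iterator<Item = [u8; 32]> {
    let mut rng = StdRng::seed_from_u64(seed);
    std::iter::repeat_with(move || {
        let k = rng.gen_range(0..entries);
        let mut key = [0u8; 32];
        key[..8].copy_from_slice(&k.to_be_bytes());
        key
    })
}
```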
Using `init_contract_state` it takes 6s to insert 1M values and 92s for 10M values. 100M values gets the process OOM-killed on macOS, as I only have 16GiB of RAM.
I'm not sure if we can simply use 10M values and multiply, since the slowdown might not be linear. In any case, I'm going to change the benchmarks to use 10M values first.
`10M` is already sufficient. We have logarithmic complexity, so the cost definitely doesn't multiply by `10` =) Maybe by `2` or `3`, since going from 10M to 100M entries only adds a few levels to an SMT path, but yeah, we can find the exact constant later.