`try-runtime::fast-forward`
We introduce a new subcommand for the `try-runtime` tool: `fast-forward`.
Functionality description
- We build `RemoteExternalities` from either a live chain or a local snapshot (`try-runtime` folklore).
- In a loop (either for `n` blocks or until `Ctrl+C`) we produce an empty block and execute it atop the local state.
By an empty block we mean a block with only a pre-runtime digest (like the slot for aura) and inherent extrinsics (like `timestamp::set`). Notice that we don't respect the standard blocktime / slot duration, but instead proceed immediately with the subsequent block.
Also, observe that effectively we are simulating (testing) two processes separately (sketched below):
- block production (`Core_initialize_block`, `BlockBuilder_inherent_extrinsics`, `BlockBuilder_apply_extrinsic`, `BlockBuilder_finalize_block`)
- block execution (`TryRuntime_execute_block`)
Usage
As the pre-runtime digest and inherents are chain-specific and crucial for block correctness, every chain must provide its own way of producing them. Please consult `node-template` for an example implementation.
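For an aura-based chain, for instance, that means supplying a pre-runtime digest carrying the next slot (the `DigestItem::PreRuntime([97, 117, 114, 97], ...)` entries visible in the output below) together with a matching timestamp inherent. Below is a hedged sketch of the digest part, assuming the usual `sp_consensus_aura` / `sp_runtime` types; the `aura_pre_digest` helper is made up for illustration, and the real integration lives in the chain's node code (e.g. `node-template`).

```rust
use codec::Encode; // parity-scale-codec
use sp_consensus_aura::{Slot, AURA_ENGINE_ID};
use sp_runtime::{Digest, DigestItem};

/// Hypothetical helper: build the aura pre-runtime digest for a given slot.
fn aura_pre_digest(slot: u64) -> Digest {
    let slot = Slot::from(slot);
    Digest { logs: vec![DigestItem::PreRuntime(AURA_ENGINE_ID, slot.encode())] }
}
```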
Example output
piomiko:~/Desktop/substrate$ ./target/release/node-template try-runtime --runtime existing fast-forward --n-blocks 2 live --uri ws://127.0.0.1:9944
2022-12-09 21:48:11 Connection established to target: Target { sockaddrs: [], host: "127.0.0.1", host_header: "127.0.0.1:9944", _mode: Plain, path_and_query: "/" }
2022-12-09 21:48:11 since no at is provided, setting it to latest finalized head, 0xc9f96f3d2fd40cf4b7de01c768cd4349cff7df65a9a425ec557fcc0896651c2a
2022-12-09 21:48:11 since no prefix is filtered, the data for all pallets will be downloaded
2022-12-09 21:48:11 scraping key-pairs from remote at block height 0xc9f96f3d2fd40cf4b7de01c768cd4349cff7df65a9a425ec557fcc0896651c2a
2022-12-09 21:48:11 Querying a total of 365 keys from prefix , splitting among 20 threads, 19 keys per thread
2022-12-09 21:48:11 adding data for hashed prefix: , took 0s
2022-12-09 21:48:11 adding data for hashed key: 3a636f6465
2022-12-09 21:48:11 adding data for hashed key: 26aa394eea5630e07c48ae0c9558cef7f9cce9c888469bb1a0dceaa129672ef8
2022-12-09 21:48:11 adding data for hashed key: 26aa394eea5630e07c48ae0c9558cef702a5c1b19ab7a04f536c519aca4983ac
2022-12-09 21:48:11 Custom("[backend]: frontend dropped; terminate client")
2022-12-09 21:48:11 initialized state externalities with storage root 0x9122dc8e4a24afbe80612625cba6685ddd00f4f27fabb57a7c64ccdb2c518e48 and state_version V1
2022-12-09 21:48:11 Connection established to target: Target { sockaddrs: [], host: "127.0.0.1", host_header: "127.0.0.1:9944", _mode: Plain, path_and_query: "/" }
2022-12-09 21:48:11 Custom("[backend]: frontend dropped; terminate client")
2022-12-09 21:48:11 Producing new empty block at height 332
2022-12-09 21:48:11 Produced a new block: Header { parent_hash: 0xc9f96f3d2fd40cf4b7de01c768cd4349cff7df65a9a425ec557fcc0896651c2a, number: 332, state_root: 0xd2c79d79468618b57ea330c5c5328b9f9862b3bc695040f25cb4828e26f470de, extrinsics_root: 0xcbcbbd0025d0ff19ce642b0a7a3bd3f48a63cb5140dfa58e169b77ac979ce9dd, digest: Digest { logs: [DigestItem::PreRuntime([97, 117, 114, 97], [129, 154, 152, 16, 0, 0, 0, 0])] } }
2022-12-09 21:48:11 Executed the new block
2022-12-09 21:48:11 Producing new empty block at height 333
2022-12-09 21:48:12 Produced a new block: Header { parent_hash: 0x7a8bbca9d11633bd05897b7b0d75c4d82a62b7e09b4f8e3dd46236e7d03a1e8f, number: 333, state_root: 0xd228a9d592794c4f0718c458d40eb83d4ec4c1466cfe5093d98d67cd98cb69b4, extrinsics_root: 0xd943453d4f8c4f364fa680aded640ad29dab8c2b4283879795cc9f64969f3877, digest: Digest { logs: [DigestItem::PreRuntime([97, 117, 114, 97], [130, 154, 152, 16, 0, 0, 0, 0])] } }
2022-12-09 21:48:12 Executed the new block
cc: @kianenigma
Polkadot companion: https://github.com/paritytech/polkadot/pull/6567
Cumulus companion: https://github.com/paritytech/cumulus/pull/2100
Polkadot address: 15fdtPm7jNN9VZk5LokKcmBVfmYZkCXdjPpaPVL2w7WgCgRY
Once revived, I should remember to delete https://github.com/paritytech/substrate/pull/12537 :)
> Notice that we don't respect the standard blocktime / slot duration, but instead proceed immediately with the subsequent block.

Won't the runtime panic if we don't set the timestamp correctly?
Pallet timestamp only requires that subsequent timestamps are strictly increasing. However, pallet aura has a problematic hook:
```rust
fn on_timestamp_set(moment: T::Moment) {
    let slot_duration = Self::slot_duration();
    assert!(!slot_duration.is_zero(), "Aura slot duration cannot be zero.");

    let timestamp_slot = moment / slot_duration;
    let timestamp_slot = Slot::from(timestamp_slot.saturated_into::<u64>());

    assert!(
        CurrentSlot::<T>::get() == timestamp_slot,
        "Timestamp slot must match `CurrentSlot`"
    );
}
```
which will fail if the timestamp is inconsistent with slot duration. Therefore, in fast-forward we cannot use actual timestamps.
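What we can do is bump the previous timestamp by exactly one slot duration per produced block, which keeps the slot derived from the timestamp in lockstep with `CurrentSlot`. A tiny standalone illustration of the arithmetic (the 6 s slot duration and the concrete timestamp are just assumed values):

```rust
// Aura derives the slot as slot = timestamp / slot_duration, so advancing the
// timestamp by exactly one slot duration advances the derived slot by exactly one.
fn main() {
    const SLOT_DURATION_MILLIS: u64 = 6_000; // assumed chain configuration
    let prev_timestamp: u64 = 1_670_620_091_000; // some previous block's timestamp
    let next_timestamp = prev_timestamp + SLOT_DURATION_MILLIS;

    let prev_slot = prev_timestamp / SLOT_DURATION_MILLIS;
    let next_slot = next_timestamp / SLOT_DURATION_MILLIS;
    assert_eq!(next_slot, prev_slot + 1);
}
```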
> Therefore, in fast-forward we cannot use actual timestamps.

I don't quite understand the problem. Why can you not multiply the block number with the block time? Like 6s normally? The timestamp will obviously be in the future, but that is correct since the block is also in the future.
Effectively I'm doing this here, right?:
```rust
let timestamp_idp = match maybe_prev_info {
    Some((inherent_data, _)) => sp_timestamp::InherentDataProvider::new(
        inherent_data.timestamp_inherent_data().unwrap().unwrap() + BLOCKTIME_MILLIS,
    ),
    None => sp_timestamp::InherentDataProvider::from_system_time(),
};
```
And this must be done (calculated) on the client side (i.e. outside try-runtime) - my intention was to make try-runtime agnostic to inherents and pre-runtime digests. Thus it's the client that must provide this data via such a callback.
Another thing is that I have BLOCKTIME_MILLIS hardcoded (as `const BLOCKTIME_MILLIS: u64 = 2 * 3_000;`), which must be aligned with the chain. I cannot read anything from the runtime configuration at this point (since runtime resolution is done on the try-runtime side). But IMHO this is acceptable.
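To illustrate why the alignment matters (with made-up numbers): if the chain actually used, say, 12 s slots, advancing the timestamp by only 6 s per block would make the timestamp-derived slot lag behind the slot claimed in the pre-runtime digest, and the aura assertion above would fire.

```rust
fn main() {
    const CHAIN_SLOT_DURATION_MILLIS: u64 = 12_000; // hypothetical chain with 12 s slots
    const BLOCKTIME_MILLIS: u64 = 2 * 3_000;        // the hardcoded 6 s value

    let start: u64 = 1_670_620_092_000;
    let start_slot = start / CHAIN_SLOT_DURATION_MILLIS;

    // After 10 fast-forwarded blocks the digest would claim `start_slot + 10`,
    // but the timestamp has only advanced by 10 * 6 s, i.e. 5 slots.
    let timestamp = start + 10 * BLOCKTIME_MILLIS;
    let timestamp_slot = timestamp / CHAIN_SLOT_DURATION_MILLIS;
    assert!(timestamp_slot < start_slot + 10); // the aura assert would have fired
}
```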
> And this must be done (calculated) on the client side (i.e. outside try-runtime) - my intention was to make try-runtime agnostic to inherents and pre-runtime digests. Thus it's the client that must provide this data via such a callback.
It seems like this is a misunderstanding. Everything in the try-runtime-cli crate is also "client side code", and has full access to system time in order to build the inherent and all. Only things within state_machine_call are actually runtime calls.
Also, can you provide a Polkadot address for a tip?
/tip large
@kianenigma A large tip was successfully submitted for pmikolajczyk41 (15fdtPm7jNN9VZk5LokKcmBVfmYZkCXdjPpaPVL2w7WgCgRY on polkadot).
https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.polkadot.io#/treasury/tips 
@sam0x17 @Ank4n if you approve here, please take a look at the companions as well :)
bot rebase
Error: Command 'Command { std: "git" "push" "--porcelain" "Cardinal-Cryptography" "piomiko/try-runtime/forward", kill_on_drop: false }' failed with status Some(128); output: remote: Permission to Cardinal-Cryptography/substrate.git denied to paritytech-processbot[bot]. fatal: unable to access 'https://x-access-token:${SECRET}@github.com/Cardinal-Cryptography/substrate.git/': The requested URL returned error: 403
Should be good to go after a rebase 👍
bot merge
Error: Github API says https://github.com/paritytech/polkadot/pull/6567 is not mergeable
bot merge
Error: "Check reviews" status is not passing for https://github.com/paritytech/cumulus/pull/2100
bot merge
This pull request has been mentioned on Polkadot Forum. There might be relevant details there:
https://forum.polkadot.network/t/polkadot-release-analysis-v0-9-39/2277/1