Carthage Updated Weights
New weights generated (from commit 674974ab35a0fa8e3758f68414c096be8bf80ee2), switching from the Amazon machine to Google Cloud (see https://github.com/Joystream/joystream/issues/4369).
The block execution weight has changed, so there is a failing test now:
---- constants::tests::block_weight_fill_cost_per_day_is_correct stdout ----
weight per block: 1500000000000, block cost: 185¢, cost/day: $26703
thread 'constants::tests::block_weight_fill_cost_per_day_is_correct' panicked at 'assertion failed: day_of_full_blocks_cost >= BLOCK_WEIGHT_FILL_MIN_DAILY_COST', runtime/src/constants.rs:209:9
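For reference, the daily figure is just the per-block cost scaled by the number of blocks per day, and the failure means that figure fell below BLOCK_WEIGHT_FILL_MIN_DAILY_COST. A minimal sketch of that arithmetic, assuming the 6-second target block time (illustrative only, not the actual test code):

fn main() {
    // assuming a 6-second target block time
    let blocks_per_day: u64 = 86_400 / 6; // 14_400
    // per-block cost taken from the test output above, in cents
    let full_block_cost_cents: u64 = 185;
    let daily_cost_dollars = blocks_per_day * full_block_cost_cents / 100;
    // prints ~$26640; the reported $26703 differs only because the printed cent figure is rounded
    println!("cost/day: ${}", daily_cost_dollars);
}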
// before
pub const BlockExecutionWeight: Weight = 4_870_839 * WEIGHT_PER_NANOS;
// new weight
pub const BlockExecutionWeight: Weight = 8_056_202 * WEIGHT_PER_NANOS;
// before
pub const ExtrinsicBaseWeight: Weight = 70_231 * WEIGHT_PER_NANOS;
// new weight
pub const ExtrinsicBaseWeight: Weight = 161_778 * WEIGHT_PER_NANOS;
How flexible are we on changing the expected block fees here, or should we run a second time to figure out why the block weight changed so much? https://github.com/Joystream/joystream/blob/64f13d24d039b71fdbff77fdf44f2d8818b46d20/runtime/src/constants.rs#L190
This is not final; I will need to run again, as there are some updates on Carthage still not included here.
After discussing with @Lezek123, we decided to try weight generation again with the recommended optimizations for the node, namely disabling SMT (hyper-threading) and using the NVMe interface for disk access. I configured two local SSDs on the Google Cloud instance and combined them into a single disk to get better performance. Generating the weights now; I will update the PR once ready.
Happy to report new generated weights do not break our fee range assumptions.
@Lezek123 The issues you pointed out re: weight functions in the content pallet, could they be caused by a problem related to this:
failures:
---- benchmarks::benchmarking::tests::buy_nft stdout ----
thread 'benchmarks::benchmarking::tests::buy_nft' panicked at 'attempt to shift left with overflow', runtime-modules/content/src/benchmarks/mod.rs:225:13
---- benchmarks::benchmarking::tests::accept_channel_transfer_member_to_member stdout ----
thread 'benchmarks::benchmarking::tests::accept_channel_transfer_member_to_member' panicked at 'attempt to shift left with overflow', runtime-modules/content/src/benchmarks/mod.rs:225:13
---- benchmarks::benchmarking::tests::accept_channel_transfer_member_to_curator stdout ----
thread 'benchmarks::benchmarking::tests::accept_channel_transfer_member_to_curator' panicked at 'attempt to shift left with overflow', runtime-modules/content/src/benchmarks/mod.rs:225:13
The benchmark tests fail when run in a debug build: cargo test --features runtime-benchmarks
failures:
benchmarks::benchmarking::tests::accept_channel_transfer_curator_to_curator
benchmarks::benchmarking::tests::accept_channel_transfer_member_to_curator
benchmarks::benchmarking::tests::accept_channel_transfer_member_to_member
benchmarks::benchmarking::tests::accept_incoming_offer
benchmarks::benchmarking::tests::add_curator_to_group
benchmarks::benchmarking::tests::buy_nft
benchmarks::benchmarking::tests::cancel_buy_now
benchmarks::benchmarking::tests::cancel_channel_transfer
benchmarks::benchmarking::tests::cancel_english_auction
benchmarks::benchmarking::tests::cancel_offer
benchmarks::benchmarking::tests::cancel_open_auction
benchmarks::benchmarking::tests::cancel_open_auction_bid
benchmarks::benchmarking::tests::channel_agent_remark
benchmarks::benchmarking::tests::channel_owner_remark
benchmarks::benchmarking::tests::channel_update_with_assets
benchmarks::benchmarking::tests::channel_update_without_assets
benchmarks::benchmarking::tests::claim_channel_and_withdraw_curator_channel_reward
benchmarks::benchmarking::tests::claim_channel_and_withdraw_member_channel_reward
benchmarks::benchmarking::tests::claim_channel_reward
benchmarks::benchmarking::tests::claim_creator_token_patronage_credit
benchmarks::benchmarking::tests::create_channel
benchmarks::benchmarking::tests::create_curator_group
benchmarks::benchmarking::tests::create_video_with_nft
benchmarks::benchmarking::tests::create_video_without_nft
benchmarks::benchmarking::tests::creator_token_issuer_transfer
benchmarks::benchmarking::tests::deissue_creator_token
benchmarks::benchmarking::tests::delete_channel
benchmarks::benchmarking::tests::delete_channel_as_moderator
benchmarks::benchmarking::tests::delete_channel_assets_as_moderator
benchmarks::benchmarking::tests::delete_video_as_moderator_with_assets
benchmarks::benchmarking::tests::delete_video_as_moderator_without_assets
benchmarks::benchmarking::tests::delete_video_assets_as_moderator
benchmarks::benchmarking::tests::delete_video_with_assets
benchmarks::benchmarking::tests::delete_video_without_assets
benchmarks::benchmarking::tests::destroy_nft
benchmarks::benchmarking::tests::finalize_creator_token_sale
benchmarks::benchmarking::tests::finalize_revenue_split
benchmarks::benchmarking::tests::init_creator_token_sale
benchmarks::benchmarking::tests::initialize_channel_transfer
benchmarks::benchmarking::tests::issue_creator_token
benchmarks::benchmarking::tests::issue_nft
benchmarks::benchmarking::tests::issue_revenue_split
benchmarks::benchmarking::tests::issue_revenue_split_as_collaborator
benchmarks::benchmarking::tests::make_creator_token_permissionless
benchmarks::benchmarking::tests::make_english_auction_bid
benchmarks::benchmarking::tests::make_open_auction_bid
benchmarks::benchmarking::tests::nft_owner_remark
benchmarks::benchmarking::tests::offer_nft
benchmarks::benchmarking::tests::pick_open_auction_winner
benchmarks::benchmarking::tests::reduce_creator_token_patronage_rate_to
benchmarks::benchmarking::tests::remove_curator_from_group
benchmarks::benchmarking::tests::sell_nft
benchmarks::benchmarking::tests::set_channel_paused_features_as_moderator
benchmarks::benchmarking::tests::set_channel_visibility_as_moderator
benchmarks::benchmarking::tests::set_curator_group_status
benchmarks::benchmarking::tests::set_video_visibility_as_moderator
benchmarks::benchmarking::tests::settle_english_auction
benchmarks::benchmarking::tests::sling_nft_back
benchmarks::benchmarking::tests::start_english_auction
benchmarks::benchmarking::tests::start_open_auction
benchmarks::benchmarking::tests::update_buy_now_price
benchmarks::benchmarking::tests::update_channel_nft_limit
benchmarks::benchmarking::tests::update_channel_payouts
benchmarks::benchmarking::tests::update_channel_privilege_level
benchmarks::benchmarking::tests::update_channel_state_bloat_bond
benchmarks::benchmarking::tests::update_curator_group_permissions
benchmarks::benchmarking::tests::update_upcoming_creator_token_sale
benchmarks::benchmarking::tests::update_video_state_bloat_bond
benchmarks::benchmarking::tests::update_video_with_assets_with_nft
benchmarks::benchmarking::tests::update_video_with_assets_without_nft
benchmarks::benchmarking::tests::update_video_without_assets_with_nft
benchmarks::benchmarking::tests::update_video_without_assets_without_nft
benchmarks::benchmarking::tests::withdraw_from_curator_channel_balance
benchmarks::benchmarking::tests::withdraw_from_member_channel_balance
The problematic code:
fn get_byte(num: u64, byte_number: u8) -> u8 {
((num & (0xff << (8 * byte_number))) >> (8 * byte_number)) as u8
}
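If it is this helper, the panic would come from the shift amount 8 * byte_number reaching or exceeding the bit width of the operand (the untyped 0xff literal is inferred as u64 from num, so any byte_number >= 8 overflows the shift in a debug build). A possible overflow-free variant, purely as a sketch and assuming callers only ever need bytes 0..=7:

fn get_byte(num: u64, byte_number: u8) -> u8 {
    // index into the little-endian byte representation instead of shifting;
    // an out-of-range byte_number returns 0 rather than panicking in debug builds
    num.to_le_bytes()
        .get(byte_number as usize)
        .copied()
        .unwrap_or(0)
}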
- Number of db reads/writes changed unexpectedly, which means benchmark results may depend on genesis config parameters (content.issue_creator_token)
This should be easy to resolve; as we discussed briefly earlier, we can run the benchmarks for the production chain. (It will need a small change to the node to allow a chain similar to dev, e.g. joystream-node --chain benchmarks.) But of course it would be best to fix the benchmarks so they are not dependent on the initial values at genesis?
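For illustration, the node change could be as small as mapping a new chain id in the CLI's load_spec (the usual SubstrateCli pattern); a rough sketch, where benchmarks_config is an assumed helper (not existing Joystream code) that builds a dev-like spec with production genesis values:

fn load_spec(&self, id: &str) -> Result<Box<dyn sc_service::ChainSpec>, String> {
    Ok(match id {
        "dev" => Box::new(chain_spec::development_config()?),
        // hypothetical: a dev-like chain with production genesis parameters,
        // so benchmark db reads/writes don't depend on throwaway dev values
        "benchmarks" => Box::new(chain_spec::benchmarks_config()?),
        path => Box::new(chain_spec::ChainSpec::from_json_file(path.into())?),
    })
}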
In the most recent weight generations I was running the benchmark command and passing the --base-path arg to select the volume attached through NVMe; is it possible that between multiple runs there was some state left over that affects the results? I know for sure that when you run the dev chain with joystream-node --dev the chain directory is always cleared before the node starts, so maybe it is not relevant.
Follow-up w.r.t. cases like channel_update_with_assets, where some parameters were omitted:
After a few more local tests I reached the following conclusions:
- a margin of error of a few percent for worst-case-scenario extrinsic parameters seems almost unavoidable; I guess there is not much we can do about it other than update to a newer version of Substrate, where the benchmarking framework may have been improved
- I was able to get the most consistent results after lowering steps/repeats to 10/5 and setting --external-repeat to between 10 and 20. When the --external-repeat x flag is used, a new StateMachine instance is created for each of those x repeats, and inside the StateMachine executor there is a --repeat number of "internal" repeats. This happens for each step, so with parameters like --external-repeat 10 --steps 10 --repeat 5 (roughly 10 × 10 × 5 = 500 measurements per component) we get something like the following. Warning, pseudocode ahead!
// pseudocode: outer loop over --steps, then --external-repeat; --repeat runs inside the executor
for step in 1..=10 {
    // pick the component values for this step
    for external_repeat in 1..=10 {
        // a fresh StateMachine per external repeat, executing 5 internal repeats
        StateMachine::create({ internal_repeats: 5, ... }).execute()
        // ...
    }
}
Updated benchmarks for the Forum and Membership pallets in acebda3, following fixes from https://github.com/Joystream/joystream/pull/4408.
Re-ran benchmarks for election_provider_multi_phase; the d component is now used again in feasibility_check, but for submit_unsigned the t component was dropped. 0073cff
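To make "the t component was dropped" concrete: the generated weight functions are linear in the benchmarked components, so dropping a component just means the regression emitted no term for it and the parameter goes unused. Purely illustrative shape with placeholder coefficients (not the values in 0073cff):

type Weight = u64; // as in frame_support::weights at this Substrate version

fn submit_unsigned(v: u32, t: u32, a: u32, d: u32) -> Weight {
    (10_000_000 as Weight)
        .saturating_add((800_000 as Weight).saturating_mul(v as Weight))
        // the regression can decide this slope is insignificant and omit the term,
        // leaving `t` unused in the regenerated function
        .saturating_add((50_000 as Weight).saturating_mul(t as Weight))
        .saturating_add((900_000 as Weight).saturating_mul(a as Weight))
        .saturating_add((300_000 as Weight).saturating_mul(d as Weight))
}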
Honestly it doesn't seem worth spending more effort on this until we have a more reliable benchmarking framework in place (when we update to a newer version of Substrate?)
When you say this, what ongoing problem are you then referring to?
Basically, re-running the benchmark fixed one function, but another one now looks wrong, in that one of its inputs/parameters is no longer used.
LGTM
Re-ran benchmarks for election_provider_multi_phase, d component is now used again in feasibility_check but for submit_unsigned the t component was dropped. 0073cff
I guess we can just re-use the previous weight function for submit_unsigned, which includes t, since they were both generated on the same machine?
Glad you mentioned that. I did suggest it to bedeho in a call earlier in the week, so we will go with that. Updated in a8512f4.