
Back and include multiple blocks at once for a single availability core

Open · rphmeier opened this issue 3 years ago · 2 comments

Right now, cores only allow one parachain block to be backed & made available at a time. We could increase the throughput of the network by raising this limit. Coupled with asynchronous backing (#3779), this would allow parachains to produce blocks at an effective rate faster than the relay-chain block rate.

We can't allow an unlimited number of parachain blocks to be posted to the relay-chain at once, because approval-voting only starts after inclusion. We'd like to avoid putting heavy, bursty load on finality; we want load to be balanced over time.

I propose introducing new configuration parameters to the relay-chain runtime, based around the idea of "candidate-submission points" earned by validator groups over time:

struct HostConfiguration {
    // ... other parameters omitted

    /// The maximum amount of candidate-submission points that can be earned over the course of a session.
    max_candidate_submission_points: u32,

    /// The maximum amount of candidate-submission points that can be spent at once by a group.
    max_candidate_submission_point_spend: u32,

    /// The amount of candidate-submission points earned by each group every block.
    candidate_submission_point_earn_rate: u32,

    /// The amount of candidate-submission points spent per candidate backed.
    candidate_submission_point_cost: u32,
}

The way this will work: at the beginning of each session, every validator group's candidate-submission point balance is reset to 0. At the beginning of each relay-chain block, each group's balance is incremented by candidate_submission_point_earn_rate, capped at max_candidate_submission_points. For every backed candidate submitted to the chain, candidate_submission_point_cost points are deducted from the backing group's balance. The total cost of the candidates a group submits in a single block must not exceed either the group's point balance or max_candidate_submission_point_spend.
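The accounting above can be sketched roughly as follows. This is a minimal sketch for illustration only; GroupPoints, earn, and try_spend are hypothetical names for this example, not part of the actual runtime code.

// Same fields as the HostConfiguration above, repeated so the snippet stands alone.
struct HostConfiguration {
    max_candidate_submission_points: u32,
    max_candidate_submission_point_spend: u32,
    candidate_submission_point_earn_rate: u32,
    candidate_submission_point_cost: u32,
}

/// Hypothetical per-group point balance, reset to 0 at the start of each session.
#[derive(Default)]
struct GroupPoints(u32);

impl GroupPoints {
    /// At the beginning of each relay-chain block: earn points, capped at the maximum.
    fn earn(&mut self, config: &HostConfiguration) {
        self.0 = (self.0 + config.candidate_submission_point_earn_rate)
            .min(config.max_candidate_submission_points);
    }

    /// When the group submits n_candidates backed candidates in one relay-chain block:
    /// the spend is rejected (balance untouched) if it would exceed either the group's
    /// balance or the per-block spend cap.
    fn try_spend(&mut self, config: &HostConfiguration, n_candidates: u32) -> bool {
        let cost = n_candidates * config.candidate_submission_point_cost;
        if cost > self.0 || cost > config.max_candidate_submission_point_spend {
            return false;
        }
        self.0 -= cost;
        true
    }
}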

This system has a few useful properties:

  • Validator groups can submit more than 1 backed candidate per block.
  • Validators can't ever submit more than a maximum number of candidates per block, which gives an effective upper bound on burst load.
  • Each validator group has a maximum allotment of points, which bounds the benefit of saving points up and encourages spending them throughout the session.
  • Validators will still earn extra rewards for candidates that they back and make available, giving incentives for validators to spend submission points.
  • Validators don't earn rewards for candidates that they don't make available, giving incentives for validators to not spend submission points on anything they don't have the bandwidth to make available quickly.

Governance can update the earn rate and maximums slowly over time based on the performance of the network (avg. utilization, time to finality, PoV size), and thereby gain effective control over the throughput of the network.

rphmeier · Feb 21 '22 03:02

As an example of how this might be useful:

Let's say a specific parachain is set up so that producing, gossiping, and importing a block takes 4s, but the relay-chain block time is still 6s. Then over a 24s period there could be 6 parachain blocks but only 4 relay-chain blocks. In the status quo this situation can't be handled effectively: the parachain would have to slow its production to the relay-chain rate, producing 4 blocks per 24s instead of 6 and leaving a third of its potential throughput on the table (it could otherwise produce 50% more blocks than the relay chain can include). With this proposal, the relay chain could back/include 2 blocks for the parachain in a single relay-chain block and keep up with the parachain's block production rate.
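To check that arithmetic concretely, here is a small worked example with hypothetical parameter values (not taken from any real configuration), chosen only to match this 4s/6s scenario: an earn rate of 3 points per relay-chain block, a cost of 2 points per backed candidate, and a per-block spend cap of 4 points (i.e. at most 2 candidates at once).

fn main() {
    let earn_rate = 3u32; // hypothetical candidate_submission_point_earn_rate
    let cost = 2u32;      // hypothetical candidate_submission_point_cost
    let max_spend = 4u32; // hypothetical max_candidate_submission_point_spend

    // Parachain blocks land at 4s intervals, relay-chain blocks at 6s intervals,
    // so over 24s the 4 relay-chain blocks see 1, 2, 1, 2 new candidates ready.
    let ready = [1u32, 2, 1, 2];

    let mut balance = 0u32;
    for (i, n) in ready.iter().copied().enumerate() {
        balance += earn_rate;
        let spend = n * cost;
        // The spend never exceeds the balance or the per-block cap.
        assert!(spend <= balance && spend <= max_spend);
        balance -= spend;
        println!("relay block {}: backed {} candidate(s), balance {}", i + 1, n, balance);
    }
}

With these illustrative values, all 6 parachain blocks fit into the 4 relay-chain blocks, with at most 2 backed at once and the balance never going negative.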

Another example:

Parathreads benefit especially from this, because blocks from different parathreads are produced completely independently (unlike a single parachain, whose blocks form a linear dependency chain). This means a validator group assigned to a parathread core that has saved up some points can spend them to increase parathread throughput substantially.

rphmeier · Feb 21 '22 03:02

This issue has been mentioned on Polkadot Forum. There might be relevant details there:

https://forum.polkadot.network/t/parachain-scaling-by-parablock-splitting/341/3

Polkadot-Forum · Sep 12 '22 15:09