Make max_blobs_per_block a config parameter
Issue Addressed
N/A
Proposed Changes
Change max_blobs_per_block from a preset value to a config value. This affects a lot of the codebase, since we now have to use runtime variants of the SSZ List and Vector types.
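Roughly, this is the kind of type the change requires - a minimal sketch of a runtime-length list, with illustrative names and API rather than the actual ssz_types / Lighthouse code:

```rust
/// Minimal sketch of a list whose maximum length is only known at runtime
/// (e.g. read from the chain config), unlike VariableList<T, N>, where the
/// bound N is a compile-time type parameter.
#[derive(Debug, Clone, PartialEq)]
pub struct RuntimeVariableList<T> {
    items: Vec<T>,
    max_len: usize,
}

impl<T> RuntimeVariableList<T> {
    /// Create an empty list with a known maximum length.
    pub fn empty(max_len: usize) -> Self {
        Self { items: Vec::new(), max_len }
    }

    /// Push an item, enforcing the runtime bound.
    pub fn push(&mut self, item: T) -> Result<(), &'static str> {
        if self.items.len() >= self.max_len {
            return Err("list is full");
        }
        self.items.push(item);
        Ok(())
    }

    pub fn len(&self) -> usize {
        self.items.len()
    }
}
```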
I've just added the 6.0.0 label to this - I think it would be great if we can release this with the getBlobsV1 changes, as we're keen to get some data on the blob count increase with that optimisation. Ideally this gets merged relatively soon given the number of files touched; however, it is a nice-to-have, so we can drop it in favour of higher-priority issues.
We also need to move MAX_BLOBS_PER_BLOCK from presets to config YAML files.
I think we need this for Electra spec v1.5.0-alpha.10, which introduces MAX_BLOBS_PER_BLOCK_ELECTRA as a config parameter for Electra.
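As a rough sketch of the shape this takes (the field and method names here are illustrative assumptions, not the exact ChainSpec API):

```rust
/// Illustrative fork and config types; the real ForkName and ChainSpec in
/// Lighthouse are much larger, and these field names are assumptions.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum ForkName {
    Deneb,
    Electra,
}

pub struct ChainSpec {
    /// Loaded from the config YAML rather than from a compile-time preset.
    pub max_blobs_per_block: u64,
    /// Corresponds to MAX_BLOBS_PER_BLOCK_ELECTRA from spec v1.5.0-alpha.10.
    pub max_blobs_per_block_electra: u64,
}

impl ChainSpec {
    /// Return the blob limit that applies at the given fork.
    pub fn max_blobs_per_block_for_fork(&self, fork: ForkName) -> u64 {
        match fork {
            ForkName::Deneb => self.max_blobs_per_block,
            ForkName::Electra => self.max_blobs_per_block_electra,
        }
    }
}
```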
I'm going to resolve conflicts on this to see if we can merge it to unstable before the alpha10 spec PRs.
I think this is ready for re-review. I got all the tests passing.
@michaelsproul Took a shot at removing empty_uninitialized in 0cd263f.
The only place where we required empty_uninitialized was when we were fetching blobs from the db by root. In the cases where there were no blobs for a given root, we did not know what max_len to provide to the runtime list.
Got around this by defining a new enum that encapsulates that case. PTAL.
Fixing the tests now.
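For reference, the idea behind the enum is roughly this (a simplified sketch with stand-in types, not the exact code in the PR):

```rust
// Simplified stand-ins for the real types, for illustration only.
type BlobSidecar = Vec<u8>;
type BlobSidecarList = Vec<BlobSidecar>;

/// Result of fetching blobs from the DB by block root. When no blobs are
/// stored for a root we don't know which fork's max_len would apply, so that
/// case gets its own variant instead of an "empty but uninitialized" list.
pub enum BlobSidecarListFromRoot {
    /// No blobs are stored for this root, so no max_len is available.
    NoBlobs,
    /// Blobs were found for this root.
    Blobs(BlobSidecarList),
}

impl BlobSidecarListFromRoot {
    /// Number of blobs, treating NoBlobs as zero.
    pub fn len(&self) -> usize {
        match self {
            Self::NoBlobs => 0,
            Self::Blobs(list) => list.len(),
        }
    }
}
```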
I ended up deleting the SSZ implementation, because we don't actually need it! I thought we'd use it for HTTP responses, but we know the max_len in this case and can construct an empty BlobSidecarList. So I went ahead and removed the places where we were storing BlobSidecarListFromRoot and just restored them to BlobSidecarList.
I'm not too worried about the stuff in do_atomically_with_block_and_blobs, because we will delete that as part of:
- https://github.com/sigp/lighthouse/issues/6559
Nice, this looks much cleaner.
We just found a bug with this: https://github.com/sigp/lighthouse/blob/04b3743ec1e0b650269dd8e58b540c02430d1c0d/beacon_node/lighthouse_network/src/rpc/methods.rs#L336-L345
This function is used in a validation here: https://github.com/sigp/lighthouse/blob/38388979db0bd672103984b017468b76087ce122/beacon_node/network/src/network_beacon_processor/rpc_methods.rs#L893-L894
and this will always fail if someone sends a range request with more than max_request_blob_sidecars sidecars, which at the ceiling value works out to 768 / 16 = 48 blocks.
We need to update the validation here to use the spec max_blobs_per_block config instead of using a ceiling value.
@jimmygchen Nice catch. I moved the check from the rpc_methods to the decoding of the request.
Also threaded the chainspec and current_fork through to the max_responses function. It's not ideal, but it isn't too bad imo, because it does make sense that the max response size depends on the current_fork.
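As a rough sketch of what the fork-aware check looks like (simplified, with assumed names rather than the actual Lighthouse signatures):

```rust
/// Illustrative types only; the real ForkName, ChainSpec and request types
/// in Lighthouse are more involved.
#[derive(Clone, Copy)]
pub enum ForkName {
    Deneb,
    Electra,
}

pub struct ChainSpec {
    pub max_blobs_per_block: u64,
    pub max_blobs_per_block_electra: u64,
    pub max_request_blob_sidecars: u64,
}

/// Sketch of the check at request-decoding time: the number of blob sidecars
/// implied by a BlobSidecarsByRange request for `count` blocks is derived
/// from the fork's configured limit, not a hard-coded ceiling, and compared
/// against max_request_blob_sidecars.
pub fn validate_blobs_by_range_count(
    count: u64,
    current_fork: ForkName,
    spec: &ChainSpec,
) -> Result<(), String> {
    let max_blobs_per_block = match current_fork {
        ForkName::Deneb => spec.max_blobs_per_block,
        ForkName::Electra => spec.max_blobs_per_block_electra,
    };
    let max_responses = count.saturating_mul(max_blobs_per_block);
    if max_responses > spec.max_request_blob_sidecars {
        return Err(format!(
            "requested up to {} blob sidecars, limit is {}",
            max_responses, spec.max_request_blob_sidecars
        ));
    }
    Ok(())
}
```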
@jxs had to add the ForkContext into the rate limiter in 70917f7. I did not like this initially, but I feel it's the best way to avoid the ugliness of keeping different max values in sync. Having the fork context in the rate limiter doesn't seem ugly, because rate limiting is also fork-dependent. Happy to explore a different solution if you feel otherwise.
@mergify queue
✅ The pull request has been merged automatically at 05727290fbef27df72e854de0ca9280ee6347101