Add max chunking limit for storage proof fetching
Problem
State root computation has significant outliers on Base, where its share of block processing time spikes from ~20% on average to 80%+, taking 200-400ms+. OTLP traces show these spikes are caused by single storage proof tasks that occupy one worker for 100-400ms while the others sit idle (usually XEN).
Root cause: chunking currently only happens when `available_workers > 1` (multiproof.rs:774). When all workers are busy (common mid-block), no chunking occurs, so a single task with 500+ targets blocks one worker for 400ms.
Proposal
Add max chunk size constant that forces chunking even when workers are busy:
```rust
const MAX_TARGETS_PER_CHUNK: usize = 50; // TBD based on research

let should_chunk = available_workers > 1
    || proof_targets.chunking_length() > MAX_TARGETS_PER_CHUNK;
```
This allows us to bound the time an individual proof fetch can take.
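To illustrate the bounding effect, here is a minimal standalone sketch of the proposed condition applied to a target collection. `chunk_targets` is a hypothetical helper (not reth's API), and plain integers stand in for real proof targets:

```rust
/// Hypothetical cap on targets per proof task; the real value is TBD
/// pending the measurements described below.
const MAX_TARGETS_PER_CHUNK: usize = 50;

/// Split proof targets into chunks so no single task exceeds the cap,
/// even when all workers are busy.
fn chunk_targets(targets: Vec<u64>, available_workers: usize) -> Vec<Vec<u64>> {
    let should_chunk =
        available_workers > 1 || targets.len() > MAX_TARGETS_PER_CHUNK;
    if !should_chunk {
        // Small task and no idle workers: keep it as a single chunk.
        return vec![targets];
    }
    targets
        .chunks(MAX_TARGETS_PER_CHUNK)
        .map(|c| c.to_vec())
        .collect()
}

fn main() {
    // 120 targets with every worker busy: previously one 120-target task
    // pinning a worker, now three tasks of at most 50 targets each.
    let chunks = chunk_targets((0..120).collect(), 1);
    assert_eq!(chunks.len(), 3);
    assert!(chunks.iter().all(|c| c.len() <= MAX_TARGETS_PER_CHUNK));
    println!("{} chunks", chunks.len());
}
```

With chunking forced above the cap, a pathological 500-target XEN fetch becomes ten ~50-target tasks that other workers can pick up as they free up.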
Research Needed to Determine MAX_TARGETS_PER_CHUNK value:
- Target: ≤10-20ms per proof task
- Method: Use PR #19590's `target_count` tracing to correlate target count with fetch time for XEN. We can measure the time spent fetching trie nodes from the DB for a single multiproof request and add it to the span as a field.
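Once per-target fetch cost is measured, deriving the cap is simple division against the latency budget. The helper below and both numbers are illustrative assumptions, not measurements from this issue:

```rust
/// Hypothetical helper: derive a chunk-size cap from a per-task latency
/// budget and a measured per-target fetch cost, both in microseconds.
fn max_chunk_for_budget(budget_us: u64, per_target_us: u64) -> usize {
    // Never go below one target per chunk.
    (budget_us / per_target_us).max(1) as usize
}

fn main() {
    // If tracing showed ~200µs per XEN target and we budget 10ms per
    // task, the cap lands at 50 targets, matching the TBD constant.
    let cap = max_chunk_for_budget(10_000, 200);
    println!("cap = {cap}"); // cap = 50
}
```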
Additional Context:
On Base:
- 90% of proof fetches are non-XEN and resolve in a few hundred microseconds or less
- 9% of proof fetches are XEN and take 1-6ms to fetch
- 1% of proof fetches are XEN and take 100ms+ to fetch
Can I?
Hey @0xKarl98, this requires us to investigate and look at tracing spans. It might not be the most straightforward setup for contributors.
Yeah, understood. @mattsse has assigned me another one.