Caplin: support backfilling blobs through a remote beacon API.
Adds support for backfilling historical blob data through a remote Beacon API as an alternative to P2P synchronization.
Implementation
- BeaconApiBlobDownloader: fetches data column sidecars from the Beacon API (`/eth/v1/beacon/data_column_sidecars/{slot}`) using SSZ encoding, stores the columns, and triggers blob recovery via PeerDAS when sufficient columns are available (see the sketch after this list)
- BlobHistoryDownloader: the P2P-based blob backfiller, refactored into a standalone component implementing the common `BlobBackfiller` interface
- Automatic selection: the stages layer selects the Beacon API downloader when `--caplin.blobs-backfiller-url` is set, otherwise it falls back to P2P
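As a rough sketch of how the pieces might fit together: the `BlobBackfiller` interface name and the endpoint path come from this PR, but the method set, struct fields, and request handling below are illustrative assumptions, not Erigon's actual code.

```go
package blobbackfill

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

// BlobBackfiller is the common interface shared by both downloaders.
// The PR only states that such an interface exists; this method set is
// an illustrative guess, not Erigon's actual definition.
type BlobBackfiller interface {
	// BackfillSlot fetches and stores blob data for a single slot,
	// reporting whether anything was written.
	BackfillSlot(ctx context.Context, slot uint64) (bool, error)
}

// BeaconApiBlobDownloader backfills blobs by querying a remote beacon node.
type BeaconApiBlobDownloader struct {
	baseURL string // value of --caplin.blobs-backfiller-url
	client  *http.Client
}

// BackfillSlot requests SSZ-encoded data column sidecars for a slot. The
// endpoint path is the one named in the PR; everything else (headers,
// error handling, persistence) is simplified for illustration.
func (d *BeaconApiBlobDownloader) BackfillSlot(ctx context.Context, slot uint64) (bool, error) {
	url := fmt.Sprintf("%s/eth/v1/beacon/data_column_sidecars/%d", d.baseURL, slot)
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Accept", "application/octet-stream") // ask for SSZ rather than JSON
	resp, err := d.client.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	switch resp.StatusCode {
	case http.StatusOK:
		// proceed to read the body
	case http.StatusNotFound:
		return false, nil // no sidecars for this slot
	default:
		return false, fmt.Errorf("unexpected status %d for slot %d", resp.StatusCode, slot)
	}
	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	// Placeholder: decode the SSZ columns, persist them, and hand them to
	// PeerDAS so it can recover the blobs once enough columns are stored.
	_ = raw
	return true, nil
}
```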
Configuration
# Use remote Beacon API for blob backfilling
erigon --caplin.blobs-backfiller-url=http://beacon-node:5052 --caplin.blobs-archive
# Alternative: immediate backfill of recent 4096 epochs
erigon --caplin.blobs-immediate-backfill
# Disable blob pruning for archival
erigon --caplin.blobs-no-pruning
Data Column Integration
- PeerDAS manages data column storage and blob recovery
- When ≥50% of the columns are available for a block, it schedules blob reconstruction (see the sketch after this list)
- Supports both direct blob download and column-based recovery paths
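A minimal sketch of the ≥50% rule above; only the threshold reflects the behaviour described in this PR, while the function name, parameters, and callback are hypothetical.

```go
package blobbackfill

// scheduleRecoveryIfReady checks whether enough data columns for a block
// have been stored to reconstruct its blobs, and if so schedules the
// reconstruction. Names and signature are illustrative, not Erigon's code.
func scheduleRecoveryIfReady(storedColumns, totalColumns int, schedule func()) bool {
	if totalColumns == 0 {
		return false
	}
	// With at least half of the columns, erasure coding lets PeerDAS
	// rebuild the missing columns and therefore the original blobs.
	if storedColumns*2 >= totalColumns {
		schedule() // enqueue blob reconstruction for this block
		return true
	}
	return false
}
```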
Backfill Strategy
Both downloaders work backwards from the current slot to the Deneb fork epoch (a rough sketch follows this list), skipping slots where:
- Blobs already exist in storage
- Block is pre-Deneb or has no blob commitments
- Slot is covered by frozen snapshots
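A rough sketch of that backward walk, reusing the hypothetical `BlobBackfiller` from the earlier sketch; the predicate helpers are assumptions standing in for Erigon's real storage, snapshot, and commitment checks.

```go
package blobbackfill

import "context"

// backfillBlobs walks backwards from the current head slot to the first
// Deneb slot, skipping slots that need no work. hasBlobs, isFrozen and
// needsBlobs are hypothetical stand-ins for the checks listed above.
func backfillBlobs(ctx context.Context, b BlobBackfiller, headSlot, denebStartSlot uint64,
	hasBlobs, isFrozen, needsBlobs func(slot uint64) bool) error {
	for slot := headSlot; slot >= denebStartSlot; slot-- {
		if err := ctx.Err(); err != nil {
			return err
		}
		skip := hasBlobs(slot) || // blobs already exist in storage
			isFrozen(slot) || // slot covered by frozen snapshots
			!needsBlobs(slot) // pre-Deneb block or no blob commitments
		if !skip {
			if _, err := b.BackfillSlot(ctx, slot); err != nil {
				return err
			}
		}
		if slot == 0 { // guard against uint64 underflow when denebStartSlot is 0
			break
		}
	}
	return nil
}
```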
@copilot please make a description of the PR and post it as a comment, I am too lazy to write it myself
@Giulio2002 I've opened a new pull request, #18344, to work on those changes. Once the pull request is ready, I'll request review from you.