
PeerDAS: super node should recover all sidecars after the first 64

Open · twoeths opened this issue 7 months ago · 2 comments

Describe the bug

The key point of PeerDAS is that we don't need to wait for all 128 data_column_sidecars to confirm the data is available; any 64 of them are enough, since the remaining columns can be reconstructed.

Waiting for all 128 causes the node to fall out of sync, as seen in #7931.

Expected behavior

For a super node, after receiving the first 64 data_column_sidecars, recover all columns. If recovery succeeds, all data is available and we can proceed to the next block without waiting for the remaining 64 data_column_sidecars. A sketch of this flow is below.
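
A minimal sketch of that flow, not using Lodestar's actual types or APIs (all names here are illustrative):

```ts
const NUMBER_OF_COLUMNS = 128;
const RECOVERY_THRESHOLD = NUMBER_OF_COLUMNS / 2; // any 64 distinct columns suffice

// Illustrative shape: column[blobIndex] is this column's cell for that blob
type DataColumnSidecar = {index: number; column: Uint8Array[]};

class BlockColumnTracker {
  private readonly received = new Map<number, DataColumnSidecar>();
  private recovering = false;

  /** Called for each data_column_sidecar received via gossip or req/resp. */
  async onSidecar(sidecar: DataColumnSidecar): Promise<boolean> {
    this.received.set(sidecar.index, sidecar);
    if (this.received.size >= RECOVERY_THRESHOLD && !this.recovering) {
      this.recovering = true;
      // Reconstruct the missing ~64 columns locally instead of waiting for
      // them to arrive over the network.
      await this.recoverMissingColumns([...this.received.values()]);
      return true; // block data fully available; safe to move to the next block
    }
    return false;
  }

  private async recoverMissingColumns(have: DataColumnSidecar[]): Promise<void> {
    // erasure-decode the remaining columns from the >= 64 we already hold
    // (see the c-kzg sketch later in this thread)
  }
}
```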

Steps to reproduce

No response

Additional context

No response

Operating system

Linux

Lodestar version or commit hash

peerDAS

twoeths · Jun 07 '25

Given this slot_84923.txt from a stable node, it takes ~8s to mark block data as available.

After receiving the first 64 data_column_sidecars, the node should mark block data as fully available by using c-kzg to recover the remaining columns. Total time would then be ~4.7s plus <100ms for ckzg.recoverCellsAndKzgProofs(). A sketch of the recovery step is below.
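
As a rough illustration, the recovery step could look like this, assuming the c-kzg node bindings' recoverCellsAndKzgProofs(cellIndices, cells); the sidecar layout, helper name, and return shape are assumptions, not confirmed API:

```ts
import {recoverCellsAndKzgProofs} from "c-kzg";

// Illustrative shape: column[blobIndex] is this column's cell for that blob
type DataColumnSidecar = {index: number; column: Uint8Array[]};

/**
 * Recover the full 128 cells (and proofs) per blob from the >= 64 columns we
 * received. Assumes the trusted setup was already loaded at startup (the
 * exact loadTrustedSetup signature varies across c-kzg versions).
 */
function recoverAllCells(
  sidecars: DataColumnSidecar[],
  blobCount: number
): {cells: Uint8Array[]; proofs: Uint8Array[]}[] {
  const cellIndices = sidecars.map((s) => s.index);
  const perBlob: {cells: Uint8Array[]; proofs: Uint8Array[]}[] = [];
  for (let blobIndex = 0; blobIndex < blobCount; blobIndex++) {
    const cells = sidecars.map((s) => s.column[blobIndex]);
    // Erasure-decodes the missing cells for this blob row; per the numbers
    // above this is expected to cost <100ms per call.
    // Return shape assumed to be [allCells, allProofs]; check your c-kzg version.
    const [cells128, proofs128] = recoverCellsAndKzgProofs(cellIndices, cells);
    perBlob.push({cells: cells128, proofs: proofs128});
  }
  return perBlob;
}
```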

twoeths · Jun 07 '25

It turns out it takes up to 4s to recover all blobs:

[image: blob recovery timing]

It makes sense to offload this computation to a worker thread and to kick it off as soon as we receive 64 data_column_sidecars, as sketched below.
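
A minimal node:worker_threads sketch of that idea; the worker file name and message shapes are made up for illustration and are not Lodestar's actual worker setup:

```ts
import {Worker} from "node:worker_threads";

/**
 * Run cell recovery off the main event loop so a multi-second recovery
 * doesn't stall gossip handling and block processing. "recoverWorker.js"
 * is assumed to call recoverCellsAndKzgProofs and post the result back.
 */
function recoverInWorker(
  cellIndices: number[],
  cells: Uint8Array[]
): Promise<{cells: Uint8Array[]; proofs: Uint8Array[]}> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(new URL("./recoverWorker.js", import.meta.url), {
      // Uint8Arrays are copied via structured clone; transfer the underlying
      // ArrayBuffers instead if copying proves too expensive.
      workerData: {cellIndices, cells},
    });
    worker.once("message", resolve);
    worker.once("error", reject);
    worker.once("exit", (code) => {
      if (code !== 0) reject(new Error(`recovery worker exited with ${code}`));
    });
  });
}
```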

twoeths · Jun 08 '25

This should be resolved already? @twoeths

nflaig · Aug 07 '25

Closing as resolved. I think for super nodes we could also recover columns during syncing, but that does not seem to be a priority since we will soon be syncing stably.

twoeths · Aug 08 '25