PeerDAS: super node should recover all sidecars after the first 64
Describe the bug
The key property of PeerDAS is that we don't need to wait for all 128 data_column_sidecars to confirm that data is available; only 64 of them should be required.
Waiting for all of them causes the node to fall out of sync, as seen in #7931.
Expected behavior
For a supernode, after receiving the first 64 data_column_sidecars, recover all of the data. If recovery succeeds, all data is available and we can proceed to the next block without waiting for the remaining 64 data_column_sidecars. A rough sketch of that flow is below.
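A minimal sketch of the expected flow, assuming hypothetical helper and constant names (Lodestar's actual types and hooks will differ):

```ts
// Sketch only: NUMBER_OF_COLUMNS, the sidecar type and the helpers below
// are illustrative assumptions, not Lodestar's actual API.
const NUMBER_OF_COLUMNS = 128;
const RECOVERY_THRESHOLD = NUMBER_OF_COLUMNS / 2; // 64

type DataColumnSidecar = {index: number; column: Uint8Array[]};

async function onDataColumnSidecar(
  received: Map<number, DataColumnSidecar>,
  sidecar: DataColumnSidecar,
  // hypothetical recovery helper (e.g. backed by ckzg.recoverCellsAndKzgProofs)
  recoverAllColumns: (received: Map<number, DataColumnSidecar>) => Promise<DataColumnSidecar[]>,
  markBlockDataAvailable: () => void
): Promise<void> {
  received.set(sidecar.index, sidecar);

  // As soon as half of the 128 columns are seen, recover the rest instead of
  // waiting for the remaining 64 sidecars to arrive over gossip.
  if (received.size === RECOVERY_THRESHOLD) {
    const allColumns = await recoverAllColumns(received);
    for (const column of allColumns) {
      received.set(column.index, column);
    }
    markBlockDataAvailable();
  }
}
```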
Steps to reproduce
No response
Additional context
No response
Operating system
Linux
Lodestar version or commit hash
peerDAS
Based on this slot_84923.txt from a stable node, it currently takes ~8s to mark block data as available.
After receiving the first 64 data_column_sidecars, the node should mark block data as fully available by using c-kzg to recover all of the data. The total time would then be ~4.7s plus <100ms for ckzg.recoverCellsAndKzgProofs().
In practice, it turns out that recovering all blobs takes up to 4s.
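For reference, a rough sketch of what that recovery step could look like, assuming the c-kzg bindings expose recoverCellsAndKzgProofs(cellIndices, cells) and return all cells and proofs for one blob row; the sidecar shape and helper name are illustrative, not Lodestar's actual code:

```ts
// Sketch: recover the full 128-column cell matrix from any >=64 received columns.
import {recoverCellsAndKzgProofs} from "c-kzg";

const NUMBER_OF_COLUMNS = 128;

type DataColumnSidecar = {
  index: number; // column index 0..127
  column: Uint8Array[]; // one cell per blob in the block
};

function recoverAllColumns(received: DataColumnSidecar[]): Uint8Array[][] {
  const blobCount = received[0].column.length;
  const cellIndices = received.map((s) => s.index);

  // columns[j][i] = cell of blob i in column j
  const columns: Uint8Array[][] = Array.from({length: NUMBER_OF_COLUMNS}, () =>
    new Array<Uint8Array>(blobCount)
  );

  // Recovery works per blob, i.e. per row of the cell matrix
  for (let blobIdx = 0; blobIdx < blobCount; blobIdx++) {
    const cells = received.map((s) => s.column[blobIdx]);
    // assumed return shape: [allCells, allProofs]
    const [allCells] = recoverCellsAndKzgProofs(cellIndices, cells);
    for (let col = 0; col < NUMBER_OF_COLUMNS; col++) {
      columns[col][blobIdx] = allCells[col];
    }
  }
  return columns;
}
```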
It makes sense to offload this computation to a worker thread and run it right after we receive 64 data_column_sidecars.
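A minimal sketch of that offloading with node:worker_threads (Lodestar's actual worker infrastructure and message shapes may differ):

```ts
// --- recoveryWorker.ts (worker side) ---
import {parentPort} from "node:worker_threads";
import {recoverCellsAndKzgProofs} from "c-kzg";

parentPort?.on("message", (msg: {cellIndices: number[]; cellsPerBlob: Uint8Array[][]}) => {
  // Do the heavy KZG recovery here so the ~4s of work does not block the main thread
  const recovered = msg.cellsPerBlob.map((cells) => recoverCellsAndKzgProofs(msg.cellIndices, cells));
  parentPort?.postMessage(recovered);
});

// --- main thread (e.g. where the 64th sidecar is seen) ---
import {Worker} from "node:worker_threads";

function recoverOnWorker(cellIndices: number[], cellsPerBlob: Uint8Array[][]): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(new URL("./recoveryWorker.js", import.meta.url));
    worker.once("message", (result) => {
      resolve(result);
      void worker.terminate();
    });
    worker.once("error", reject);
    worker.postMessage({cellIndices, cellsPerBlob});
  });
}
```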
This should be resolved already? @twoeths
Closing as resolved. I think for supernodes we could also recover columns during syncing, but that does not seem like a priority since we will soon be syncing stably.