The code change looks good to me; I didn't review the dashboard JSON, but the appearance looks great.
> @twoeths Can this be closed via #380?

@philknows not yet, since I merged it to the `te/batch_hash_tree_root` branch. Will close it once we merge to `master`.
Given this [slot_84923.txt](https://github.com/user-attachments/files/20637741/slot_84923.txt) from a stable node, it takes ~8s to mark block data as available after receiving the first 64 data_column_sidecars; the node should mark block data as fully...
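For context, a minimal sketch of the availability check implied here, assuming 128 data columns per block and that any 50% of them is enough to reconstruct the rest; `BlockDataAvailability` and the constants are illustrative names, not Lodestar's actual API:

```ts
// Minimal sketch: consider block data available once half of the data columns
// have been seen, since the remaining columns can be reconstructed.
// Constants and names are illustrative, not Lodestar's actual code.
const NUMBER_OF_COLUMNS = 128;
const RECONSTRUCTION_THRESHOLD = NUMBER_OF_COLUMNS / 2;

type ColumnIndex = number;

class BlockDataAvailability {
  private readonly seen = new Set<ColumnIndex>();

  /** Record a received data_column_sidecar; returns true once block data can be considered available. */
  onDataColumnSidecar(index: ColumnIndex): boolean {
    this.seen.add(index);
    return this.seen.size >= RECONSTRUCTION_THRESHOLD;
  }
}
```

With a check like this, the node would flip to "available" as soon as the 64th column arrives rather than ~8s later.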
It turns out it takes up to 4s to recover all blobs. It makes sense to offload this computation to a worker thread and do it right after we receive...
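A minimal sketch of offloading that recovery with Node's built-in `node:worker_threads`, assuming the module is compiled so it can spawn itself; `recoverBlobs`, `RecoveryInput`, and `recoverBlobsInWorker` are placeholder names, not Lodestar's actual implementation:

```ts
// Minimal sketch (not Lodestar's actual code) of offloading blob recovery to a
// worker thread so the ~4s computation does not block the main event loop.
import {Worker, isMainThread, parentPort, workerData} from "node:worker_threads";
import {fileURLToPath} from "node:url";

type RecoveryInput = {columns: Uint8Array[]};
type RecoveryResult = {blobs: Uint8Array[]; durationMs: number};

/** Placeholder for the real erasure-coding recovery of blobs from data columns. */
function recoverBlobs(columns: Uint8Array[]): Uint8Array[] {
  return columns.map((c) => c.slice());
}

if (!isMainThread) {
  // Worker side: run the CPU-heavy recovery off the main thread and post the result back.
  const {columns} = workerData as RecoveryInput;
  const start = Date.now();
  const result: RecoveryResult = {blobs: recoverBlobs(columns), durationMs: Date.now() - start};
  parentPort?.postMessage(result);
}

/** Main-thread helper: start recovery in a worker right after enough columns are received. */
export function recoverBlobsInWorker(columns: Uint8Array[]): Promise<RecoveryResult> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(fileURLToPath(import.meta.url), {workerData: {columns}});
    worker.once("message", (result: RecoveryResult) => resolve(result));
    worker.once("error", reject);
  });
}
```

The main thread could call `recoverBlobsInWorker(columns)` as soon as enough columns arrive and keep processing gossip while the recovery runs; passing the column buffers via the `transferList` option would avoid the structured-clone copy if that turns out to matter.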
Closing as resolved. I think for supernodes we can also recover columns during syncing, but that does not seem to be a priority as we will soon be syncing stably.
Yes, this happens after we process the first block of an epoch, where `stateRoot` is always `ZERO_HASH`. I don't remember in which cases we use this state; if we do, we...
Right now we store 2 checkpoint states per epoch:
- One is with `${last block root of previous epoch : current epoch}`; this is non-spec and to save us time...
Also, we probably need to change the condition for the spec checkpoint above to `if (blockEpoch > parentEpoch)`; will address in #5968.
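A rough sketch of the two cache entries per epoch and the proposed condition, assuming a `${blockRoot}:${epoch}` key format; `cacheCheckpointStates` and its parameters are illustrative, not Lodestar's actual code:

```ts
// Minimal sketch of the two checkpoint-state cache entries per epoch discussed
// above; key format and names are illustrative, not Lodestar's actual code.
type RootHex = string;
type Epoch = number;

/** Cache key in the form `${blockRoot}:${epoch}`. */
function toCheckpointKey(rootHex: RootHex, epoch: Epoch): string {
  return `${rootHex}:${epoch}`;
}

const checkpointStates = new Map<string, unknown>();

function cacheCheckpointStates(opts: {
  state: unknown;
  lastBlockRootOfPrevEpochHex: RootHex;
  specCheckpointRootHex: RootHex;
  blockEpoch: Epoch;
  parentEpoch: Epoch;
}): void {
  // Non-spec entry: `${last block root of previous epoch}:${current epoch}`,
  // kept to save time when regenerating states.
  checkpointStates.set(toCheckpointKey(opts.lastBlockRootOfPrevEpochHex, opts.blockEpoch), opts.state);

  // Spec checkpoint entry, stored only when the block crosses an epoch
  // boundary, i.e. the proposed condition above.
  if (opts.blockEpoch > opts.parentEpoch) {
    checkpointStates.set(toCheckpointKey(opts.specCheckpointRootHex, opts.blockEpoch), opts.state);
  }
}
```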
Yes, this was implemented at the time when we didn't have `vc` metrics. Now the `vc` metrics don't show any APIs that may cause the lag, so I think we can remove...
@nflaig it would be great to fix the broken "Lodestar - block production" panel's "Success + Error" rates; the labels are duplicated.