twoeths


a lot of 0 in my log:

```
Nov 21 22:16:31 feat1-lg1k-hzax41-sas beacon_run.sh[3821539]: flagsNewSet: 0
Nov 21 22:16:31 feat1-lg1k-hzax41-sas beacon_run.sh[3821539]: flagsNewSet: 0
Nov 21 22:16:31 feat1-lg1k-hzax41-sas beacon_run.sh[3821539]: flagsNewSet: 0
Nov...
```

somehow memory and gc increased in this branch; it looks like it was caused by publishing more DataColumnSidecars

when verifying a gossip block, we call this function:

```typescript
export function getBlockProposerSignatureSet(
  state: CachedBeaconStateAllForks,
  signedBlock: SignedBeaconBlock | SignedBlindedBeaconBlock
): ISignatureSet {
  const {config, epochCtx} = state;
  const domain = ...
```
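For context, a minimal illustrative sketch of what a single-pubkey signature set for the proposer might look like: the type and helper names below are assumptions for illustration, not the actual definitions used in the codebase.

```typescript
// Illustrative only: a single-pubkey signature set as it might be shaped for
// proposer signature verification. Real types and helpers differ.
interface SingleSignatureSetSketch {
  type: "single";
  pubkey: Uint8Array; // proposer pubkey, looked up from the state's index -> pubkey cache
  signingRoot: Uint8Array; // block message root combined with the proposer domain
  signature: Uint8Array; // signedBlock.signature
}

function buildProposerSignatureSetSketch(
  proposerPubkey: Uint8Array,
  signingRoot: Uint8Array,
  signature: Uint8Array
): SingleSignatureSetSketch {
  return {type: "single", pubkey: proposerPubkey, signingRoot, signature};
}
```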

this is a blocker for #8619

network thread `gc` time also increased since fulu

[network_thread_hoodi_sas_nov_22.cpuprofile.zip](https://github.com/user-attachments/files/23692051/network_thread_hoodi_sas_nov_22.cpuprofile.zip)

`gc` is huge, 13.5%. After the debugging session with @wemeetagain some ideas have come up:

- use `Buffer.allocUnsafe` for snappyjs, increase poolSize to some numbers, 10MB was too big...
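To make the `Buffer.allocUnsafe` / poolSize idea concrete, here is a hedged sketch; the poolSize value and the wrapper function are assumptions for illustration, not what the codebase actually does.

```typescript
import {Buffer} from "node:buffer";

// Buffer.allocUnsafe skips zero-filling and, for sizes below Buffer.poolSize >>> 1,
// slices from Node's pre-allocated internal pool, which reduces allocation and GC
// pressure compared to Buffer.alloc.
Buffer.poolSize = 512 * 1024; // illustrative value; per the note above, 10MB was too big

// Hypothetical wrapper: safe only if the snappy decompressor overwrites every byte
// of the returned buffer before it is read.
export function allocUncompressedBuffer(size: number): Buffer {
  return Buffer.allocUnsafe(size);
}
```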

the current design of RangeSync post-fulu is for single fork only:

- for each `PartialDownload` we maintain a single array of `BlockInput`
- then we cache all BlockInputs and some...
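As a reference point, a minimal sketch of the single-fork shape described above; only `PartialDownload` and `BlockInput` come from the comment itself, everything else is an illustrative assumption.

```typescript
// Illustrative only: the single-fork shape described above.
interface BlockInput {
  slot: number;
  // block plus blob/column data elided
}

interface PartialDownload {
  // a single flat array, so every BlockInput is implicitly treated as belonging
  // to the same fork
  blockInputs: BlockInput[];
}
```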

in this case the batch with startEpoch=7 was downloaded before the batches with startEpoch=5 and startEpoch=6; we should only process a batch if we have processed the parent block...

```typescript
export function getNextBatchToProcess(batches: Batch[]): Batch | null {
  for (const batch of batches) {
    switch (batch.state.status) {
      // If an AwaitingProcessing batch exists it can only be preceded by...
```
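To illustrate the ordering rule from the previous comment (only process a batch once every batch with a lower startEpoch has been processed), here is a hedged standalone sketch; the `SimpleBatch` type and status names are simplified assumptions, not the real `Batch` state machine.

```typescript
type BatchStatus = "AwaitingDownload" | "Downloading" | "AwaitingProcessing" | "Processing" | "AwaitingValidation";

interface SimpleBatch {
  startEpoch: number;
  status: BatchStatus;
}

/**
 * Return the next batch to process, but only if every batch with a lower
 * startEpoch has already been processed (AwaitingValidation stands in for
 * "processed" here). Otherwise return null and wait.
 */
function getNextBatchToProcessSketch(batches: SimpleBatch[]): SimpleBatch | null {
  const sorted = [...batches].sort((a, b) => a.startEpoch - b.startEpoch);
  for (const batch of sorted) {
    if (batch.status === "AwaitingValidation") continue; // already processed, keep scanning
    if (batch.status === "AwaitingProcessing") return batch; // all earlier batches are processed
    return null; // an earlier batch is still downloading or processing, so wait
  }
  return null;
}
```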

the 0 block batch happens with a lot of clients - lighthouse

```
verbose: Batch download error id=Finalized-2, startEpoch=0, status=Downloading, peer=16...BQ8fLQ - peerId=16Uiu2HAmFsDPzqKFfMuiDTBiRBNALSao7RF2p56TaD4FneBQ8fLQ peerClient=Lighthouse-cgc:4 returns no blocks or allBlobSidecars, blocks=32,...
```