failed to fetch blobs: expected 1 sidecars but got 0
Bug Description
Ever since I upgraded to a new version a month ago (the version that added --l1.beacon), I've been having issues with network sync: the L2 network is no longer syncing. I don't know if there's something wrong with my configuration, but it's stuck now.
Steps to Reproduce
The L1 client I used was erigon and the beacon node was lighthouse. When I upgraded optimism to 1.7.2, synchronization stalled.
Expected behavior
I expect it to keep syncing normally, as it did before the update.
Environment Information:
- Operating System: Ubuntu 20.04.6 LTS
- Package Version (or commit hash): v1.7.2
Configurations:
lighthouse:
lighthouse --network sepolia beacon \
--http \
--http-address 0.0.0.0 \
--checkpoint-sync-url=https://beaconstate-sepolia.chainsafe.io \
--execution-endpoint=http://127.0.0.1:8551 \
--execution-jwt=/jwt/node_beacon/jwt.hex \
--http-port 8552 \
--port 8553
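(A quick way to sanity-check that this beacon API is reachable before chasing derivation errors, assuming the Lighthouse HTTP API is listening on port 8552 as configured above:
# should return a small JSON sync-status object if the beacon API is up
curl -s http://127.0.0.1:8552/eth/v1/node/syncing
)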
optimism:
L1URL=http://127.0.0.1:8545
L1BEACONURL=http://127.0.0.1:8552
L1KIND=erigon
NET=op-sepolia
cd /geth/optimism/op-node
./bin/op-node \
--l1=$L1URL \
--l1.rpckind=$L1KIND \
--l1.beacon=$L1BEACONURL \
--l2=http://localhost:8561 \
--l2.jwt-secret=/geth/optimism/op-node/jwt.txt \
--network=$NET \
--rpc.addr=0.0.0.0 \
--rpc.port=8547
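To see what op-node reports while stuck, you can query its sync-status RPC on the port configured above, e.g.:
# optimism_syncStatus returns the current unsafe/safe/finalized heads
curl -s -X POST -H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"optimism_syncStatus","params":[],"id":1}' \
http://127.0.0.1:8547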
op-geth:
#! /usr/bin/bash
SEQUENCER_URL=https://sepolia-sequencer.optimism.io/
cd /geth/op-geth
./build/bin/geth \
--op-network=op-sepolia \
--ws \
--ws.port=8556 \
--ws.addr=0.0.0.0 \
--ws.origins="*" \
--http \
--http.port=8555 \
--http.addr=0.0.0.0 \
--http.vhosts="*" \
--http.corsdomain="*" \
--authrpc.addr=localhost \
--authrpc.jwtsecret=/geth/op-geth/jwt.txt \
--authrpc.port=8561 \
--discovery.port=30313 \
--port=30313 \
--authrpc.vhosts="*" \
--verbosity=3 \
--rpc.gascap=500000000 \
--rollup.sequencerhttp=$SEQUENCER_URL \
--nodiscover \
--syncmode=full \
--maxpeers=0 \
--datadir=./data \
--snapshot=false \
--gcmode=archive
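To tell whether op-geth's head is actually advancing, you can poll its HTTP RPC on the port configured above; if the number doesn't change between calls a few minutes apart, the execution head is stuck:
# returns the latest block number as a hex quantity
curl -s -X POST -H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
http://127.0.0.1:8555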
Logs:
op-geth:
INFO [03-24|06:08:25.781] Loaded most recent local finalized block number=8,366,372 hash=64ceb3..276313 td=0 age=1mo1d13h
INFO [03-24|06:08:25.784] Initialising Ethereum protocol network=11,155,420 dbversion=8
INFO [03-24|06:08:25.785] Loaded local transaction journal transactions=0 dropped=0
INFO [03-24|06:08:25.785] Regenerated local transaction journal transactions=0 accounts=0
INFO [03-24|06:08:25.785] Chain post-merge, sync via beacon client
INFO [03-24|06:08:25.786] Gasprice oracle is ignoring threshold set threshold=2
INFO [03-24|06:08:25.786] Initialized transaction indexer limit=0
WARN [03-24|06:08:25.787] Unclean shutdown detected booted=2024-03-24T03:57:18+0000 age=2h11m7s
WARN [03-24|06:08:25.787] Unclean shutdown detected booted=2024-03-24T05:53:33+0000 age=14m52s
WARN [03-24|06:08:25.789] Engine API enabled protocol=eth
INFO [03-24|06:08:25.789] Starting peer-to-peer node instance=Geth/v1.101308.2-stable-0402d543/linux-amd64/go1.21.6
INFO [03-24|06:08:25.799] IPC endpoint opened url=/geth/op-geth/data/geth.ipc
INFO [03-24|06:08:25.799] Loaded JWT secret file path=/geth/op-geth/jwt.txt crc32=0x6620d394
INFO [03-24|06:08:25.800] HTTP server started endpoint=[::]:8555 auth=false prefix= cors=* vhosts=*
INFO [03-24|06:08:25.800] WebSocket enabled url=ws://[::]:8556
INFO [03-24|06:08:25.800] WebSocket enabled url=ws://127.0.0.1:8561
INFO [03-24|06:08:25.800] HTTP server started endpoint=127.0.0.1:8561 auth=true prefix= cors=localhost vhosts=*
INFO [03-24|06:08:25.801] New local node record seq=1,711,096,038,590 id=ec1461c5a6df4de9 ip=127.0.0.1 udp=0 tcp=30313
INFO [03-24|06:08:25.802] Started P2P networking self="enode://[email protected]:30313?discport=0"
INFO [03-24|07:08:25.788] Regenerated local transaction journal transactions=0 accounts=0
INFO [03-24|08:08:25.787] Regenerated local transaction journal transactions=0 accounts=0
INFO [03-24|09:08:25.788] Regenerated local transaction journal transactions=0 accounts=0
optimism:
t=2024-03-25T03:10:02+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0x11c27f0f5b47d8e5c09c60b97cc905d6972126215e70a8b6d8b67f7c9634573f:9766831
t=2024-03-25T03:10:04+0000 lvl=info msg="Received signed execution payload from p2p" id=0x052ff56fb624794a63b49aefea8b4ea596abd52e75a38cc61e511d005738dd8e:9766832 peer=16Uiu2HAmFuhGDyx1PwgLm8gAhDufUcVEP42hAL2EN9VmmHocriZi
t=2024-03-25T03:10:04+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0x052ff56fb624794a63b49aefea8b4ea596abd52e75a38cc61e511d005738dd8e:9766832
t=2024-03-25T03:10:06+0000 lvl=info msg="Received signed execution payload from p2p" id=0x0aa2c184e0120033d96860271040f64e607372efd800a0be5e5d75e4a75d40f6:9766833 peer=16Uiu2HAmFuhGDyx1PwgLm8gAhDufUcVEP42hAL2EN9VmmHocriZi
t=2024-03-25T03:10:06+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0x0aa2c184e0120033d96860271040f64e607372efd800a0be5e5d75e4a75d40f6:9766833
t=2024-03-25T03:10:07+0000 lvl=warn msg="Derivation process temporary error" attempts=7434 err="engine stage failed: temp: failed to fetch blobs: failed to get blob sidecars for L1BlockRef 0x799e89369a21a862a5ae506841e56b1cd4895f5d7b7fabe66add300a6c1cb315:5335452: expected 1 sidecars but got 0"
t=2024-03-25T03:10:08+0000 lvl=info msg="Received signed execution payload from p2p" id=0x68b89b9c535b0d15aecaf88be92fbe918fe4aafd6c711f8f292625c703b08f99:9766834 peer=16Uiu2HAmFuhGDyx1PwgLm8gAhDufUcVEP42hAL2EN9VmmHocriZi
t=2024-03-25T03:10:08+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0x68b89b9c535b0d15aecaf88be92fbe918fe4aafd6c711f8f292625c703b08f99:9766834
t=2024-03-25T03:10:10+0000 lvl=info msg="Received signed execution payload from p2p" id=0xa3e34c0054ed7456a80ad8790da5f6cce605cccc258ef04b57348c3fbfa3aa69:9766835 peer=16Uiu2HAmFuhGDyx1PwgLm8gAhDufUcVEP42hAL2EN9VmmHocriZi
I have the same error. Please advise.
I would be grateful for any input.
I have the same problem; my node is trying to sync on Optimism mainnet.
Apr 22 12:34:29 my-server bash[42857]: t=2024-04-22T12:34:29+0000 lvl=warn msg="failed to serve p2p sync request" serve=payloads_by_number peer=16Uiu2HAm3-2217 remote=/ip4/103.180.28.215/tcp/9222 req=119083463 err="peer requested unknown block by number: not found"
Apr 22 12:34:29 my-server bash[42857]: t=2024-04-22T12:34:29+0000 lvl=info msg="Received signed execution payload from p2p" id=0xbd0958a2df614ed945e84b8eff181ceefe4cc561f7fc90bcca788301ce9a670e:119095246 peer=16Uiu2HAmTrP55nReSSHC8seJ29DFg1vQKmFXhtdW4GsSkXBPgHSv
Apr 22 12:34:29 my-server bash[42857]: t=2024-04-22T12:34:29+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0xbd0958a2df614ed945e84b8eff181ceefe4cc561f7fc90bcca788301ce9a670e:119095246
Apr 22 12:34:29 my-server bash[42857]: t=2024-04-22T12:34:29+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0x6ace165f0b87522a32551d1e8b311a217dae69111c1b3c67d9ab3198e140f326:119054636
Apr 22 12:34:29 my-server bash[42857]: t=2024-04-22T12:34:29+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0x6ef225c362ae4c35a42faae1ad843e6195a97489c84fd0ec695f4211a538992d:119054637
For almost a week I have been stuck at the same block.
Syncing op-sepolia from scratch with a v1.7.3 node and v1.101311.0 geth, I've now encountered this too: running unpruned Prysm nodes on v5.0.3, stuck at 8366372, and running op-node with --l1.beacon-archiver=$OP_NODE__L1_BEACON_ARCHIVE and --l1.beacon=$OP_NODE__L1_BEACON.
t=2024-04-22T16:34:45+0000 lvl=warn msg="Derivation process temporary error" attempts=8749 err="engine stage failed: temp: failed to fetch blobs: failed to get blob sidecars for L1BlockRef 0x799e89369a21a862a5ae506841e56b1cd4895f5d7b7fabe66add300a6c1cb315:5335452: expected 1 sidecars but got 0"
t=2024-04-22T16:34:45+0000 lvl=info msg="connected to peer" peer=16Uiu2HAmBwxxZjBESz7xVU6p595znCfddbgdQpXSfLueSu7eBfwp addr=/ip4/80.64.208.80/tcp/50477
t=2024-04-22T16:34:45+0000 lvl=info msg="Starting P2P sync client event loop" peer=16Uiu2HAmBwxxZjBESz7xVU6p595znCfddbgdQpXSfLueSu7eBfwp
t=2024-04-22T16:34:46+0000 lvl=info msg="Received signed execution payload from p2p" id=0xedae620245ddfd90675654cb80418b806169cfb0d2d06953198eaf8677880ac5:11000573 peer=16Uiu2HAmPoBKuQkThkP8kMR7iRayNbvHNVRD34iCJCUYaY9z7h3j
t=2024-04-22T16:34:46+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0xedae620245ddfd90675654cb80418b806169cfb0d2d06953198eaf8677880ac5:11000573
t=2024-04-22T16:34:46+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0x812a3c67ad94aeed70aa0281cf5ea3a30318282ab028896cbcfb12f71254788e:10848315
t=2024-04-22T16:34:48+0000 lvl=info msg="Received signed execution payload from p2p" id=0xd340032962ed15e3fbd8385f4ab82a912b63717ccacf6e59ebafb070f3999082:11000574 peer=16Uiu2HAmPoBKuQkThkP8kMR7iRayNbvHNVRD34iCJCUYaY9z7h3j
t=2024-04-22T16:34:48+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0xd340032962ed15e3fbd8385f4ab82a912b63717ccacf6e59ebafb070f3999082:11000574
t=2024-04-22T16:34:50+0000 lvl=info msg="Received signed execution payload from p2p" id=0xcc5ff1d6836e15b251a05c3a3f17e254624d8d638668ee95f6504b21a1aae482:11000575 peer=16Uiu2HAmPoBKuQkThkP8kMR7iRayNbvHNVRD34iCJCUYaY9z7h3j
t=2024-04-22T16:34:50+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0xcc5ff1d6836e15b251a05c3a3f17e254624d8d638668ee95f6504b21a1aae482:11000575
t=2024-04-22T16:34:50+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0x7212fa89cbbe661de2b84370d07f36eb4c7fb7312a4c5b2a1a6b2dade81dfd83:10848316
t=2024-04-22T16:34:52+0000 lvl=info msg="Received signed execution payload from p2p" id=0x2d8640ce0136c70e466b07eaa8165c024cfd68d1e58cf41c38bed2c7a5d1ad35:11000576 peer=16Uiu2HAmPoBKuQkThkP8kMR7iRayNbvHNVRD34iCJCUYaY9z7h3j
t=2024-04-22T16:34:52+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0x2d8640ce0136c70e466b07eaa8165c024cfd68d1e58cf41c38bed2c7a5d1ad35:11000576
t=2024-04-22T16:34:53+0000 lvl=info msg="connected to peer" peer=16Uiu2HAmPxjjB561cMjM6kSkuRAwQr4nhY6KHozyo2KDHzvSxBEf addr=/ip4/222.106.187.48/tcp/49582
t=2024-04-22T16:34:53+0000 lvl=info msg="Starting P2P sync client event loop" peer=16Uiu2HAmPxjjB561cMjM6kSkuRAwQr4nhY6KHozyo2KDHzvSxBEf
t=2024-04-22T16:34:54+0000 lvl=info msg="disconnected from peer" peer=16Uiu2HAmPxjjB561cMjM6kSkuRAwQr4nhY6KHozyo2KDHzvSxBEf addr=/ip4/222.106.187.48/tcp/49582
t=2024-04-22T16:34:54+0000 lvl=info msg="Received signed execution payload from p2p" id=0xcb485f734cdd7a970c0769ed41daf822ac5ac36913fca4051b4df6a4f28fe62f:11000577 peer=16Uiu2HAmPoBKuQkThkP8kMR7iRayNbvHNVRD34iCJCUYaY9z7h3j
t=2024-04-22T16:34:54+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0xcb485f734cdd7a970c0769ed41daf822ac5ac36913fca4051b4df6a4f28fe62f:11000577
t=2024-04-22T16:34:54+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0xf807ad98a50744edae53a2b0a1fe49ec635c084639e80cdf1babeea952879b2e:10848317
t=2024-04-22T16:34:54+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0xc2e99d085142c27808e8a8cb07d6366050639ee3f7dd9363c48655e20f40cd51:10848318
t=2024-04-22T16:34:55+0000 lvl=warn msg="Derivation process temporary error" attempts=8750 err="engine stage failed: temp: failed to fetch blobs: failed to get blob sidecars for L1BlockRef 0x799e89369a21a862a5ae506841e56b1cd4895f5d7b7fabe66add300a6c1cb315:5335452: expected 1 sidecars but got 0"
There is no way to get past it. If I may be blunt: is there really a team maintaining this chain?
I have the same issue, but with op-mainnet.
t=2024-04-23T16:36:37+0000 lvl=warn msg="Derivation process temporary error" attempts=8595 err="stage 0 failed resetting: temp: failed to find the L2 Heads to start from: failed to fetch current L2 forkchoice state: failed to find the finalized L2 block: failed to parse L1 info deposit tx from L2 block: data is unexpected length: 260"
t=2024-04-23T16:36:37+0000 lvl=info msg="Received signed execution payload from p2p" id=0xb70f0a88ef99d630db1db6c7f7715f403ddc73f98331d63b8e9ba5bb4a8aa434:119145710 peer=16Uiu2HAmJiNges1tqVoFW7g8habSsMwJgqUP6gbL3ofMPvmoTY3d
t=2024-04-23T16:36:37+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0xb70f0a88ef99d630db1db6c7f7715f403ddc73f98331d63b8e9ba5bb4a8aa434:119145710
t=2024-04-23T16:36:40+0000 lvl=info msg="Received signed execution payload from p2p" id=0x2d7036730b236076762e9a678396af3202c1072e2731bc0850d9ccb1999236c4:119145711 peer=16Uiu2HAkxfLbtojJACcSDKzdG6QHd8q4TG9MtNCwEhqaneGTCvDw
t=2024-04-23T16:36:40+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0x2d7036730b236076762e9a678396af3202c1072e2731bc0850d9ccb1999236c4:119145711
t=2024-04-23T16:36:40+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0x7be94fcf6e7578b2084ca23ff632fdaeb7a7a0f33077b2638d47708526866409:119115374
t=2024-04-23T16:36:40+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0xf4b048ae7655cf4a6a00efb90fed454e7933087b2ebf67115b8df56eb9b084c7:119115375
t=2024-04-23T16:36:40+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0xd6b7e1a40315e78024b2c16d8cf8a6ee38bb73a3ddf5243b4a05da4751f8012b:119115376
t=2024-04-23T16:36:40+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0xd7e58f60eaa99dfd749290515834d954aa2c3edc112dd2c91fa8e0938473b501:119115377
t=2024-04-23T16:36:40+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0xc4bb1ef23c53d7b7cc4d6e28cb6908a41a0e5736bfe2407042afcaa0360e5659:119115378
t=2024-04-23T16:36:40+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0x918350ad148b03ffadb504e6cb91d9d42164db3a3323cc120c892d0e12f970bc:119115379
t=2024-04-23T16:36:41+0000 lvl=info msg="Received signed execution payload from p2p" id=0xb47f958bbbca8c1c3b45dfe78150137bcd5a5cb7e6327da0e220e86267e5a286:119145712 peer=16Uiu2HAm7dhyfPKohuLyiZ2nUopuNUUdZhJSwxHymwbrGkuuJdZ7
t=2024-04-23T16:36:41+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0xb47f958bbbca8c1c3b45dfe78150137bcd5a5cb7e6327da0e220e86267e5a286:119145712
t=2024-04-23T16:36:44+0000 lvl=info msg="Received signed execution payload from p2p" id=0xd1f3e5440b113c5653aa5c64549dc3b7fa77af29ff61fc36f054562ab7753d42:119145713 peer=16Uiu2HAkxfLbtojJACcSDKzdG6QHd8q4TG9MtNCwEhqaneGTCvDw
t=2024-04-23T16:36:44+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0xd1f3e5440b113c5653aa5c64549dc3b7fa77af29ff61fc36f054562ab7753d42:119145713
t=2024-04-23T16:36:44+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0xebcc7ea0713a8146e498cbfee0e29dc9bc3af70cdce0cac448eda4c8e3423e33:119115380
t=2024-04-23T16:36:44+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0xd51a637d122f5db1ee8b3a86ff69ab78b50850ce93a66f5e356a8f644c55b9e0:119115381
t=2024-04-23T16:36:45+0000 lvl=info msg="Received signed execution payload from p2p" id=0xe897155cd8ae6fd2a07bdaa6db27dec141a91a4ec34743e9db7be3be7c8fa976:119145714 peer=16Uiu2HAm7dhyfPKohuLyiZ2nUopuNUUdZhJSwxHymwbrGkuuJdZ7
t=2024-04-23T16:36:45+0000 lvl=info msg="Optimistically queueing unsafe L2 execution payload" id=0xe897155cd8ae6fd2a07bdaa6db27dec141a91a4ec34743e9db7be3be7c8fa976:119145714
t=2024-04-23T16:36:45+0000 lvl=info msg="Dropping payload from payload queue because the payload queue is too large" id=0x56f18506d6b33b33561f0b0ffc085d65729ff5e87021cae98c791a42c06761d3:119115382
Please help @trianglesphere @geoknee @sebastianst @tynes
There is no way to get past it. If I may be blunt: is there really a team maintaining this chain?
It moves forward for me using an external beacon service, so something is off about the way these beacon endpoints are configured, and the docs seem incomplete about how to configure and enable the needed data. I tried Prysm without pruning, i.e. synced from genesis with no pruning flags enabled, but no dice. So I tried Lighthouse using checkpoint sync, setting prune-blobs to false and enabling genesis-backfill; it does not seem to backfill the blobs, however. I'm now trying Lighthouse syncing from genesis with the same flags; we'll see in a while whether it works. You can test your beacon endpoint like this:
curl http://192.168.0.17:55702/eth/v1/beacon/blob_sidecars/5335452
To test the blob sidecars, change the block number to the one the derivation error in your logs reports, and see whether your beacon endpoint has it. @alexqrid your error seems to indicate you're using checkpoint sync with no backfill, if I were to guess; it's not even about the blob sidecars:
t=2024-04-23T16:36:37+0000 lvl=warn msg="Derivation process temporary error" attempts=8595 err="stage 0 failed resetting: temp: failed to find the L2 Heads to start from: failed to fetch current L2 forkchoice state: failed to find the finalized L2 block: failed to parse L1 info deposit tx from L2 block: data is unexpected length: 260"
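A quick way to read that sidecar response is to count how many sidecars come back; a small sketch, assuming jq is available and using the endpoint from the curl above (fewer sidecars than the derivation error expects means the beacon node pruned, or never backfilled, the blobs for that block):
# prints the number of sidecars in the response;
# 0 reproduces the "expected 1 sidecars but got 0" error
curl -s http://192.168.0.17:55702/eth/v1/beacon/blob_sidecars/5335452 | jq '.data | length'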
@Rogalek you need to look earlier in your logs for an error like "Derivation process temporary error" to see what it's saying. Past a certain point it may stop showing that and just sync optimistically for a while, at least in my experience; especially if I change from consensus-layer sync to execution-layer sync, it'll just optimistically queue payloads forever and never update the geth head.
We recently fixed something in our blob fetcher (https://github.com/ethereum-optimism/optimism/pull/10210), which I believe might fix some of the mentioned problems with blob fetching; we will release this very soon. In the meantime, you can already try out op-node/v1.7.4-rc.2.
Other than that, if you're syncing, the Beacon node you're using must have all historical blobs, or you must use a fallback beacon that has all historical blobs.
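For the fallback, op-node accepts a separate archiver endpoint alongside the primary beacon via the --l1.beacon-archiver flag mentioned earlier in this thread. A minimal sketch based on the original script, with a hypothetical archive URL standing in for a real blob-archiver service:
# hypothetical blob archiver; queried when the primary beacon
# no longer has the historical blobs op-node asks for
L1BEACONARCHIVER=https://beacon-archive.example.com
./bin/op-node \
--l1=$L1URL \
--l1.rpckind=$L1KIND \
--l1.beacon=$L1BEACONURL \
--l1.beacon-archiver=$L1BEACONARCHIVER \
--l2=http://localhost:8561 \
--l2.jwt-secret=/geth/optimism/op-node/jwt.txt \
--network=op-sepolia \
--rpc.addr=0.0.0.0 \
--rpc.port=8547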
Closing this, as we believe it has been fixed.