
Erigon (Polygon Mainnet Archive) syncing slow/stuck at Execution stage

[Open] Jyoti-singh18 opened this issue 1 year ago · 11 comments

Our Erigon node for the Polygon mainnet archive is syncing very slowly, or rather seems stuck, at the Execution stage [7/15 Execution]; for a few weeks before this it was stuck at the Bodies stage. Below are some logs showing that it is syncing, but the Execution stage has now been running for more than a couple of weeks.

[INFO] [07-13|12:49:40.944] Fetching state updates from Heimdall fromID=2703876 to=2023-06-23T23:12:46Z
[INFO] [07-13|12:49:40.947] StateSyncData number=44261792 lastStateID=2703875 total records=0 fetch time=2 process time=0
[INFO] [07-13|12:49:41.958] Fetching state updates from Heimdall fromID=2703876 to=2023-06-23T23:13:20Z
[INFO] [07-13|12:49:41.961] StateSyncData number=44261808 lastStateID=2703875 total records=0 fetch time=3 process time=0
[INFO] [07-13|12:49:42.016] [txpool] stat pending=10000 baseFee=0 queued=30000 alloc=8.5GB sys=12.2GB
[INFO] [07-13|12:49:42.570] Fetching state updates from Heimdall fromID=2703876 to=2023-06-23T23:13:54Z
[INFO] [07-13|12:49:42.573] StateSyncData number=44261824 lastStateID=2703875 total records=0 fetch time=2 process time=0
[INFO] [07-13|12:49:43.591] Fetching state updates from Heimdall fromID=2703876 to=2023-06-23T23:14:28Z
[INFO] [07-13|12:49:43.595] StateSyncData number=44261840 lastStateID=2703875 total records=0 fetch time=3 process time=0
[INFO] [07-13|12:49:44.522] [7/15 Execution] Executed blocks number=44261852 blk/s=18.1 tx/s=814.8 Mgas/s=274.9 gasState=0.22 batch=195.3MB alloc=8.6GB sys=12.2GB
[INFO] [07-13|12:49:45.261] Fetching state updates from Heimdall fromID=2703876 to=2023-06-23T23:15:04Z
[INFO] [07-13|12:49:45.264] StateSyncData number=44261856 lastStateID=2703875 total records=0 fetch time=3 process time=0
[INFO] [07-13|12:49:46.251] Fetching state updates from Heimdall fromID=2703876 to=2023-06-23T23:15:45Z
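For what it's worth, a small script like the one below can pull the Execution-stage throughput out of the log output to confirm the stage is still advancing. This is only a rough sketch: the log file path is an assumption, and the regex simply matches the "[7/15 Execution] Executed blocks" lines shown above.

```python
import re

# Minimal sketch: extract Execution-stage progress from an Erigon log file.
# LOG_PATH is an assumption -- point it at wherever your erigon output is captured.
LOG_PATH = "erigon.log"

# Matches lines like:
# [INFO] [07-13|12:49:44.522] [7/15 Execution] Executed blocks number=44261852 blk/s=18.1 ...
pattern = re.compile(r"\[\d+/\d+ Execution\] Executed blocks number=(\d+) blk/s=([\d.]+)")

samples = []
with open(LOG_PATH) as f:
    for line in f:
        m = pattern.search(line)
        if m:
            samples.append((int(m.group(1)), float(m.group(2))))

if samples:
    last_block, _ = samples[-1]
    avg_rate = sum(rate for _, rate in samples) / len(samples)
    print(f"last executed block:     {last_block}")
    print(f"average reported rate:   {avg_rate:.1f} blk/s")
```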

Below is the output of an "eth_syncing" call, which shows different block numbers for the different stages.

{ "jsonrpc": "2.0", "id": 1, "result": { "currentBlock": "0x2a319f0", "highestBlock": "0x2aac6b9", "stages": [ { "stage_name": "Snapshots", "block_number": "0x2a319f0" }, { "stage_name": "Headers", "block_number": "0x2aac6b9" }, { "stage_name": "BlockHashes", "block_number": "0x2aac6b9" }, { "stage_name": "Bodies", "block_number": "0x2aac6b9" }, { "stage_name": "Senders", "block_number": "0x2aac6b9" }, { "stage_name": "Execution", "block_number": "0x2a341eb" }, { "stage_name": "Translation", "block_number": "0x0" }, { "stage_name": "HashState", "block_number": "0x2a319f0" }, { "stage_name": "IntermediateHashes", "block_number": "0x2a319f0" }, { "stage_name": "AccountHistoryIndex", "block_number": "0x2a319f0" }, { "stage_name": "StorageHistoryIndex", "block_number": "0x2a319f0" }, { "stage_name": "LogIndex", "block_number": "0x2a319f0" }, { "stage_name": "CallTraces", "block_number": "0x2a319f0" }, { "stage_name": "TxLookup", "block_number": "0x2a319f0" }, { "stage_name": "Finish", "block_number": "0x2a319f0" } ] } }

The Erigon snapshot used is from the official Polygon snapshot site at https://snapshots.polygon.technology/.

The disk is a 16 TB SSD and the machine has more than 120 GB of RAM. The current disk usage is shown below:

[image: current disk usage]

Another thing to note: we recently upgraded Erigon to [v0.0.8] (for the Polygon Indore hard fork), which corresponds to v2.48.0 in the Erigon repo.

Could anyone suggest what may be causing the slowness?

Jyoti-singh18 · Jul 13 '23 13:07