go-ethereum
Need to prune Geth but not enough blocks
Geth version: 1.10.21-stable
OS & Version: Linux Ubuntu Server
Hi everyone, I need to prune my Geth node to prepare for the "merge", but running geth snapshot prune-state --datadir /my/data/dir gives me the error "snapshot not old enough yet: need 128 more blocks", even though my chain was fully synced before I shut geth down.
I've never pruned Geth, so the size of my chaindata is now around 1.4TB (including 300GB of ancient data).
$ sudo cat /etc/systemd/system/eth1.service
......
User=goeth
Group=goeth
ExecStart = /usr/bin/geth --cache=2048 --datadir /var/lib/goethereum --identity "xxxx" --http --http.api "admin,db,eth,net,web3" --http.port 8545 --http.addr "0.0.0.0" --authrpc.jwtsecret xxxxx
$ sudo -u goeth geth --datadir /var/lib/goethereum snapshot prune-state
INFO [08-01|11:52:45.130] Maximum peer count                       ETH=50 LES=0 total=50
INFO [08-01|11:52:45.132] Smartcard socket not found, disabling    err="stat /run/pcscd/pcscd.comm: no such file or directory"
INFO [08-01|11:52:45.140] Set global gas cap                       cap=50,000,000
INFO [08-01|11:52:45.141] Allocated cache and file handles         database=/var/lib/goethereum/geth/chaindata cache=512.00MiB handles=524,288
INFO [08-01|11:52:54.799] Opened ancient database                  database=/var/lib/goethereum/geth/chaindata/ancient readonly=false
WARN [08-01|11:52:54.858] Snapshot maintenance disabled (syncing)
INFO [08-01|11:52:55.002] Initialized state bloom                  size=2.00GiB
ERROR[08-01|11:53:03.572] Failed to prune state                    err="snapshot not old enough yet: need 128 more blocks"
snapshot not old enough yet: need 128 more blocks
$ sudo du -h /var/lib/goethereum/geth/chaindata/
290G    /var/lib/goethereum/geth/chaindata/ancient
1.1T    /var/lib/goethereum/geth/chaindata/
Thanks
A snapshot is required for running the pruning mechanism. Your node doesn't have enough snapshot layers.
You can just run the node for a while (a couple of minutes); once it has accumulated more than 128 layers, you can run this command.
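For reference, a sketch of that sequence, assuming the eth1 unit name and datadir from the service file above (offline pruning needs geth itself to be stopped):
$ sudo systemctl start eth1    # let geth run until more than 128 snapshot layers (blocks) have accumulated
$ sudo systemctl stop eth1     # geth must not be running while prune-state works on the database
$ sudo -u goeth geth --datadir /var/lib/goethereum snapshot prune-state
$ sudo systemctl start eth1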
After each try I check that my chain is synced, so normally I should have my 128 blocks.
According to your log, it says "Snapshot maintenance disabled (syncing)". It looks like your node is not synced yet. Can you share some of the latest logs?
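The logs below are tailed from the systemd journal; a command along these lines (unit name eth1 assumed from the service file above) produces them:
$ sudo journalctl -fu eth1 -n 100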
Aug 04 18:37:57 eth-node1 geth[968641]: INFO [08-04|18:37:57.749] State heal in progress accounts=2,709,[email protected] slots=634,[email protected] codes=120,[email protected] nodes=1,571,905,[email protected] pending=176,545
Aug 04 18:38:06 eth-node1 geth[968641]: INFO [08-04|18:38:06.652] State heal in progress accounts=2,709,[email protected] slots=634,[email protected] codes=120,[email protected] nodes=1,571,906,[email protected] pending=177,593
Aug 04 18:38:16 eth-node1 geth[968641]: INFO [08-04|18:38:16.660] State heal in progress accounts=2,709,[email protected] slots=634,[email protected] codes=120,[email protected] nodes=1,571,907,[email protected] pending=178,442
Aug 04 18:38:24 eth-node1 geth[968641]: INFO [08-04|18:38:24.723] State heal in progress accounts=2,709,[email protected] slots=634,[email protected] codes=120,[email protected] nodes=1,571,908,[email protected] pending=179,936
Aug 04 18:38:28 eth-node1 geth[968641]: INFO [08-04|18:38:28.890] Imported new block headers count=1 elapsed=43.432ms number=15,276,995 hash=0044d4..935ecc age=1m26s
Aug 04 18:38:34 eth-node1 geth[968641]: INFO [08-04|18:38:34.137] State heal in progress accounts=2,710,[email protected] slots=634,[email protected] codes=120,[email protected] nodes=1,571,909,[email protected] pending=180,059
Aug 04 18:38:34 eth-node1 geth[968641]: INFO [08-04|18:38:34.934] Imported new block headers count=1 elapsed=27.443ms number=15,276,996 hash=98bff4..9a6b8f age=1m4s
Aug 04 18:38:37 eth-node1 geth[968641]: INFO [08-04|18:38:37.971] Imported new block headers count=1 elapsed=23.947ms number=15,276,997 hash=08f7b1..7ac011
Aug 04 18:38:42 eth-node1 geth[968641]: INFO [08-04|18:38:42.545] State heal in progress accounts=2,710,[email protected] slots=634,[email protected] codes=120,[email protected] nodes=1,571,909,[email protected] pending=180,994
Aug 04 18:38:50 eth-node1 geth[968641]: INFO [08-04|18:38:50.619] State heal in progress accounts=2,710,[email protected] slots=635,[email protected] codes=120,[email protected] nodes=1,571,910,[email protected] pending=181,804
Aug 04 18:38:53 eth-node1 geth[968641]: INFO [08-04|18:38:53.096] Imported new block headers count=1 elapsed=25.802ms number=15,276,998 hash=90901c..97e30a
Aug 04 18:38:53 eth-node1 geth[968641]: INFO [08-04|18:38:53.552] Downloader queue stats receiptTasks=0 blockTasks=0 itemSize=329.68KiB throttle=796
Aug 04 18:38:58 eth-node1 geth[968641]: INFO [08-04|18:38:58.791] State heal in progress accounts=2,710,[email protected] slots=635,[email protected] codes=120,[email protected] nodes=1,571,912,[email protected] pending=182,566
Aug 04 18:39:02 eth-node1 geth[968641]: INFO [08-04|18:39:02.182] Imported new block headers count=1 elapsed=32.087ms number=15,276,999 hash=da4dd6..478f2d
Aug 04 18:39:02 eth-node1 geth[968641]: WARN [08-04|18:39:02.560] Pivot became stale, moving old=15,276,873 new=15,276,937
Aug 04 18:39:03 eth-node1 geth[968641]: INFO [08-04|18:39:03.460] Imported new block receipts count=64 elapsed=893.159ms number=15,276,936 hash=433146..07dffc age=15m2s size=9.71MiB
Aug 04 18:39:04 eth-node1 geth[968641]: WARN [08-04|18:39:04.217] Unexpected trienode heal packet peer=a96cd7fd reqid=6,982,111,323,957,963,683
Aug 04 18:39:04 eth-node1 geth[968641]: INFO [08-04|18:39:04.217] State heal in progress accounts=2,711,[email protected] slots=635,[email protected] codes=120,[email protected] nodes=1,571,913,[email protected] pending=182,952
~$ date
Thu Aug 4 18:40:36 CEST 2022
~$ geth attach http://127.0.0.1:8545
Welcome to the Geth JavaScript console!
instance: Geth/xxxx/v1.10.21-stable-67109427/linux-amd64/go1.18.4
at block: 0 (Thu Jan 01 1970 01:00:00 GMT+0100 (CET))
datadir: /var/lib/goethereum
modules: admin:1.0 eth:1.0 net:1.0 rpc:1.0 web3:1.0
To exit, press ctrl-d or type exit
> eth.syncing
{
currentBlock: 15276936,
healedBytecodeBytes: 981266211,
healedBytecodes: 120860,
healedTrienodeBytes: 498289494317,
healedTrienodes: 1571927631,
healingBytecode: 0,
healingTrienodes: 1733,
highestBlock: 15277003,
startingBlock: 15252219,
syncedAccountBytes: 35196561409,
syncedAccounts: 149821236,
syncedBytecodeBytes: 2419305436,
syncedBytecodes: 429905,
syncedStorage: 496909118,
syncedStorageBytes: 106873240256
}
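For comparison, once the node has finished syncing and healing, the same check should simply return false:
> eth.syncing
false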
Aug 04 18:38:34 eth-node1 geth[968641]: INFO [08-04|18:38:34.137] State heal in progress accounts=2,710,[email protected] slots=634,[email protected] codes=120,[email protected] nodes=1,571,909,[email protected] pending=180,059
Yes, your node is still healing, not synced yet.
State heal is in progress. If this doesn't complete within hours, or at most a day or two, then you do not have sufficient IOPS and need faster storage.
Not a bug.
I use Proxmox, but specifically for my node an Intel DC NVMe SSD (636,500/111,500 IOPS) on a ZFS partition, so normally I have enough IOPS and speed.
I've deleted my whole chain with geth removedb and now the "pending" count is decreasing nicely.
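For reference, a sketch of that reset, assuming the eth1 unit and datadir from the service file above (geth removedb asks for confirmation before deleting anything):
$ sudo systemctl stop eth1
$ sudo -u goeth geth removedb --datadir /var/lib/goethereum
$ sudo systemctl start eth1
The fio run below then benchmarks the node's storage with 4k random reads/writes: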
$ fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.16
Starting 1 process
test: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=63.1MiB/s,w=20.6MiB/s][r=16.2k,w=5275 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1236154: Tue Aug 9 17:51:02 2022
read: IOPS=14.1k, BW=55.1MiB/s (57.8MB/s)(3070MiB/55720msec)
bw ( KiB/s): min=36160, max=65520, per=99.96%, avg=56396.22, stdev=6977.85, samples=111
iops : min= 9040, max=16380, avg=14099.06, stdev=1744.53, samples=111
write: IOPS=4713, BW=18.4MiB/s (19.3MB/s)(1026MiB/55720msec); 0 zone resets
bw ( KiB/s): min=11936, max=22320, per=99.97%, avg=18849.02, stdev=2323.78, samples=111
iops : min= 2984, max= 5580, avg=4712.23, stdev=580.93, samples=111
cpu : usr=25.11%, sys=74.61%, ctx=271, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=55.1MiB/s (57.8MB/s), 55.1MiB/s-55.1MiB/s (57.8MB/s-57.8MB/s), io=3070MiB (3219MB), run=55720-55720msec
WRITE: bw=18.4MiB/s (19.3MB/s), 18.4MiB/s-18.4MiB/s (19.3MB/s-19.3MB/s), io=1026MiB (1076MB), run=55720-55720msec
Disk stats (read/write):
dm-1: ios=813562/262866, merge=0/0, ticks=142068/49844, in_queue=191912, util=99.90%, aggrios=815839/263373, aggrmerge=0/258, aggrticks=152103/52746, aggrin_queue=14096, aggrutil=99.88%
sda: ios=815839/263373, merge=0/258, ticks=152103/52746, in_queue=14096, util=99.88%
$ sudo iostat -mdx 240 2
Linux 5.4.0-121-generic (eth-node1) 08/09/22 _x86_64_ (16 CPU)
Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
dm-1 526.94 5.04 0.00 0.00 0.68 9.79 38.65 3.03 0.00 0.00 1.71 80.18 0.50 0.60 0.00 0.00 0.15 1223.83 0.42 85.09
loop0 0.00 0.00 0.00 0.00 0.51 8.08 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop1 0.00 0.00 0.00 0.00 0.77 10.99 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop2 0.00 0.00 0.00 0.00 0.65 10.26 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop3 0.00 0.00 0.00 0.00 0.85 2.69 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop4 0.00 0.00 0.00 0.00 0.86 15.53 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop5 0.00 0.00 0.00 0.00 0.98 21.38 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop6 0.12 0.00 0.00 0.00 0.43 1.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop7 0.00 0.00 0.00 0.00 0.70 12.76 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop8 0.00 0.00 0.00 0.00 0.48 7.65 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 524.13 5.04 2.81 0.53 0.69 9.84 21.17 3.02 19.39 47.81 1.96 146.12 0.50 0.60 0.00 0.00 0.17 1224.60 0.02 85.09
Nowadays, it's better to run geth with --state.scheme=path; then you get pruning out of the box.
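For example, a fresh node could be started with something like the following (assuming a geth release recent enough to support path-based state; an existing hash-scheme database generally requires a resync to switch):
$ geth --datadir /var/lib/goethereum --state.scheme=path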