
deadlock when calling eth.getLogs if fromBlock is small

Open github1youlc opened this issue 2 years ago • 8 comments

System information

Geth version: 1.1.15
Architecture: amd64
Go Version: go1.18.3
Operating System: linux

OS & Version: Linux

Expected behaviour

After attaching to the geth console, this call

eth.getLogs({"fromBlock": "0x0", "toBlock": "0x3E8", "topics": [["0x3c13bc30b8e878c53fd2a36b679409c073afd75950be43d8858768e956fbc20e"]]})

should return normally.

Actual behaviour

It blocks forever, and some goroutines are trapped in a deadlock:

sync.runtime_SemacquireMutex+0x24 /home/ubuntu/opt/go/src/runtime/sema.go:71
sync.(*Mutex).lockSlow+0x164 /home/ubuntu/opt/go/src/sync/mutex.go:162
sync.(*Mutex).Lock+0x52 /home/ubuntu/opt/go/src/sync/mutex.go:81
sync.(*Once).doSlow+0x26 /home/ubuntu/opt/go/src/sync/once.go:64
sync.(*Once).Do+0x44 /home/ubuntu/opt/go/src/sync/once.go:59
github.com/ethereum/go-ethereum/core/bloombits.(*MatcherSession).Close+0x14 /home/ubuntu/projects/bsc-workspace/bsc-geth/core/bloombits/matcher.go:528
github.com/ethereum/go-ethereum/eth/filters.(*Filter).indexedLogs+0x4c9 /home/ubuntu/projects/bsc-workspace/bsc-geth/eth/filters/filter.go:201
github.com/ethereum/go-ethereum/eth/filters.(*Filter).Logs+0x1ab /home/ubuntu/projects/bsc-workspace/bsc-geth/eth/filters/filter.go:162
github.com/ethereum/go-ethereum/eth/filters.(*PublicFilterAPI).GetLogs+0x192 /home/ubuntu/projects/bsc-workspace/bsc-geth/eth/filters/api.go:355

sync.runtime_Semacquire+0x24 /home/ubuntu/opt/go/src/runtime/sema.go:56
sync.(*WaitGroup).Wait+0x51 /home/ubuntu/opt/go/src/sync/waitgroup.go:136
github.com/ethereum/go-ethereum/core/bloombits.(*MatcherSession).Close.func1+0x33 /home/ubuntu/projects/bsc-workspace/bsc-geth/core/bloombits/matcher.go:531
sync.(*Once).doSlow+0xc1 /home/ubuntu/opt/go/src/sync/once.go:68
sync.(*Once).Do+0x44 /home/ubuntu/opt/go/src/sync/once.go:59
github.com/ethereum/go-ethereum/core/bloombits.(*MatcherSession).Close+0x14 /home/ubuntu/projects/bsc-workspace/bsc-geth/core/bloombits/matcher.go:528
github.com/ethereum/go-ethereum/core/bloombits.(*MatcherSession).Multiplex+0x359 /home/ubuntu/projects/bsc-workspace/bsc-geth/core/bloombits/matcher.go:641

github.com/ethereum/go-ethereum/core/bloombits.(*Matcher).distributor+0x31e /home/ubuntu/projects/bsc-workspace/bsc-geth/core/bloombits/matcher.go:411
github.com/ethereum/go-ethereum/core/bloombits.(*Matcher).run.func2+0x24 /home/ubuntu/projects/bsc-workspace/bsc-geth/core/bloombits/matcher.go:253
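
Reading these stacks, both the GetLogs goroutine and the Multiplex goroutine end up in MatcherSession.Close, which is guarded by a sync.Once and waits on a WaitGroup that counts Multiplex itself, so neither side can finish. Below is a minimal Go sketch of that general pattern; the type and field names are illustrative, not the actual bloombits code:

package main

import "sync"

// session mirrors the shape of the problem: Close is guarded by a
// sync.Once and waits for all workers, but a worker may call Close itself.
type session struct {
	closer sync.Once
	quit   chan struct{}
	pend   sync.WaitGroup
}

func (s *session) Close() {
	s.closer.Do(func() {
		close(s.quit)
		// Waits for every worker to call pend.Done. A worker that is itself
		// blocked inside s.closer.Do can never get there.
		s.pend.Wait()
	})
}

// worker stands in for Multiplex: on shutdown it calls Close on its own session.
func (s *session) worker() {
	defer s.pend.Done()
	<-s.quit  // pretend we hit a termination condition
	s.Close() // blocks on the Once that is still running, so Done never fires
}

func main() {
	s := &session{quit: make(chan struct{})}
	s.pend.Add(1)
	go s.worker()
	s.Close() // never returns; the runtime reports "all goroutines are asleep - deadlock!"
}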

Steps to reproduce the behaviour

Execute this call in an attached geth console:

eth.getLogs({"fromBlock": "0x0", "toBlock": "0x3E8", "topics": [["0x3c13bc30b8e878c53fd2a36b679409c073afd75950be43d8858768e956fbc20e"]]})

github1youlc avatar Oct 10 '22 07:10 github1youlc

I have the same problem; hope it can be fixed soon!

iamman2021 avatar Oct 10 '22 10:10 iamman2021

Thanks, the team will dig into this issue.

j75689 avatar Oct 12 '22 01:10 j75689

Here it doesn't crash, but the request hangs for a while and then times out. Could this be related to pruning? Does pruning remove old logs? Newer logs work fine with the same node.

michaelr524 avatar Oct 17 '22 17:10 michaelr524

Here it doesn't crash, but the request hangs for a while and then times out. Could this be related to pruning? Does pruning remove old logs? Newer logs work fine with the same node.

Yes, newer logs work fine on my node too. The node data was downloaded from https://github.com/bnb-chain/bsc-snapshots.

github1youlc avatar Oct 18 '22 02:10 github1youlc

Same here. I will attempt a full sync without a snapshot.

michaelr524 avatar Oct 18 '22 07:10 michaelr524

Same issue here. Any progress?

kmalloc avatar Oct 26 '22 04:10 kmalloc

I worked around the issue by running a full sync from scratch and then pruning. The node didn't sync new blocks in a timely manner until after pruning. When downloading and starting from a snapshot, the total size on disk quickly grows to the same size as a full sync, so consider whether using snapshots on a regular basis has enough benefits to outweigh the issues.

michaelr524 avatar Oct 26 '22 14:10 michaelr524

Apparently pruning removes tx data older than 3 months. Without pruning the node was lagging, but we need the old data, so we are stuck at the moment. Please suggest what can be done to run a full node with all data.

michaelr524 avatar Oct 28 '22 11:10 michaelr524

Apparently pruning removes tx data older than 3 months. Without pruning the node was lagging, but we need the old data, so we are stuck at the moment. Please suggest what can be done to run a full node with all data.

Use prune-state rather than prune-block if you want to query old blocks, tx receipts, etc.

jacobpake avatar Jan 30 '23 17:01 jacobpake

@github1youlc is this still an issue for you?

bnb-tw avatar Jul 05 '23 02:07 bnb-tw