deadlock when calling eth.getLogs if fromBlock is small
System information
Geth version: 1.1.15
Architecture: amd64
Go Version: go1.18.3
Operating System: linux
OS & Version: Linux
Expected behaviour
After attaching to geth, this script
eth.getLogs({"fromBlock": "0x0", "toBlock": "0x3E8", "topics": [["0x3c13bc30b8e878c53fd2a36b679409c073afd75950be43d8858768e956fbc20e"]]})
should return a response normally.
Actual behaviour
It blocks forever, and some goroutines are trapped in a deadlock.
sync.runtime_SemacquireMutex+0x24 /home/ubuntu/opt/go/src/runtime/sema.go:71
sync.(*Mutex).lockSlow+0x164 /home/ubuntu/opt/go/src/sync/mutex.go:162
sync.(*Mutex).Lock+0x52 /home/ubuntu/opt/go/src/sync/mutex.go:81
sync.(*Once).doSlow+0x26 /home/ubuntu/opt/go/src/sync/once.go:64
sync.(*Once).Do+0x44 /home/ubuntu/opt/go/src/sync/once.go:59
github.com/ethereum/go-ethereum/core/bloombits.(*MatcherSession).Close+0x14 /home/ubuntu/projects/bsc-workspace/bsc-geth/core/bloombits/matcher.go:528
github.com/ethereum/go-ethereum/eth/filters.(*Filter).indexedLogs+0x4c9 /home/ubuntu/projects/bsc-workspace/bsc-geth/eth/filters/filter.go:201
github.com/ethereum/go-ethereum/eth/filters.(*Filter).Logs+0x1ab /home/ubuntu/projects/bsc-workspace/bsc-geth/eth/filters/filter.go:162
github.com/ethereum/go-ethereum/eth/filters.(*PublicFilterAPI).GetLogs+0x192 /home/ubuntu/projects/bsc-workspace/bsc-geth/eth/filters/api.go:355

sync.runtime_Semacquire+0x24 /home/ubuntu/opt/go/src/runtime/sema.go:56
sync.(*WaitGroup).Wait+0x51 /home/ubuntu/opt/go/src/sync/waitgroup.go:136
github.com/ethereum/go-ethereum/core/bloombits.(*MatcherSession).Close.func1+0x33 /home/ubuntu/projects/bsc-workspace/bsc-geth/core/bloombits/matcher.go:531
sync.(*Once).doSlow+0xc1 /home/ubuntu/opt/go/src/sync/once.go:68
sync.(*Once).Do+0x44 /home/ubuntu/opt/go/src/sync/once.go:59
github.com/ethereum/go-ethereum/core/bloombits.(*MatcherSession).Close+0x14 /home/ubuntu/projects/bsc-workspace/bsc-geth/core/bloombits/matcher.go:528
github.com/ethereum/go-ethereum/core/bloombits.(*MatcherSession).Multiplex+0x359 /home/ubuntu/projects/bsc-workspace/bsc-geth/core/bloombits/matcher.go:641

github.com/ethereum/go-ethereum/core/bloombits.(*Matcher).distributor+0x31e /home/ubuntu/projects/bsc-workspace/bsc-geth/core/bloombits/matcher.go:411
github.com/ethereum/go-ethereum/core/bloombits.(*Matcher).run.func2+0x24 /home/ubuntu/projects/bsc-workspace/bsc-geth/core/bloombits/matcher.go:253
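Reading the traces: the GetLogs goroutine blocks on the sync.Once mutex inside MatcherSession.Close, while the Multiplex goroutine has already entered Close and is stuck in pend.Wait(), apparently waiting for session goroutines (such as the distributor) that in turn can only make progress if Multiplex keeps running, so nothing ever completes. Below is a minimal Go sketch of that pattern; the names (session, worker, multiplex, deliver) are hypothetical and this is not the actual matcher.go code, only an illustration of how a Once-guarded Close that waits on a WaitGroup can deadlock when it is called from the goroutine the tracked workers depend on.

package main

import (
	"fmt"
	"sync"
	"time"
)

// session is a simplified stand-in for the shape suggested by the traces:
// a sync.Once guarding Close, and a WaitGroup (pend) tracking the session's
// worker goroutines.
type session struct {
	closer  sync.Once
	pend    sync.WaitGroup
	quit    chan struct{}
	deliver chan int // unbuffered: the worker blocks until the multiplexer reads
}

// Close signals termination and waits for every pend goroutine to return.
// The Once's internal mutex is held while the function runs, so a second
// caller of Close blocks on that mutex (the GetLogs stack above).
func (s *session) Close() {
	s.closer.Do(func() {
		close(s.quit)
		s.pend.Wait()
	})
}

// worker plays the role of the distributor: it blocks sending a result that
// only the multiplexer consumes.
func (s *session) worker() {
	defer s.pend.Done()
	s.deliver <- 42 // stuck: nobody will ever read this
}

// multiplex plays the role of Multiplex: on its error path it calls Close,
// which waits for the worker, while the worker is waiting for multiplex.
func (s *session) multiplex() {
	// ...error detected before draining s.deliver...
	s.Close() // blocks in pend.Wait(): circular wait with worker()
}

func main() {
	s := &session{quit: make(chan struct{}), deliver: make(chan int)}

	s.pend.Add(1)
	go s.worker()
	go s.multiplex()

	time.Sleep(100 * time.Millisecond)
	fmt.Println("caller (the GetLogs analogue) calling Close...")
	s.Close() // blocks forever on the Once's internal mutex
	fmt.Println("never reached")
}

Running this sketch, every goroutine ends up blocked, mirroring the hang observed in GetLogs.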
Steps to reproduce the behaviour
Execute this script in an attached geth console.
eth.getLogs({"fromBlock": "0x0", "toBlock": "0x3E8", "topics": [["0x3c13bc30b8e878c53fd2a36b679409c073afd75950be43d8858768e956fbc20e"]]})
I have the same problem, hope it can be fixed soon!
Thanks, the team will dig into this issue.
Here it doesn't crash, but the request hangs for a while and then times out. I wondered whether this could be related to pruning: does pruning remove old logs? Newer logs work fine with the same node.
Yes, newer logs work fine on my node too. The node data was downloaded from https://github.com/bnb-chain/bsc-snapshots.
Same here. Will attempt a full sync without a snapshot.
Same issue here. Any progress?
Worked around the issue by running a full sync from scratch and then a prune. The node didn't sync new blocks in a timely manner until after pruning. When downloading and starting from a snapshot, the total size on disk quickly grows to the same size as a full sync, so one should consider whether using snapshots on a regular basis has enough benefits to outweigh the issues.
Apparently pruning removes tx data older than 3 months. Without pruning the node was lagging, and we need the old data, so we are stuck at the moment. Please suggest what can be done to run a full node with all data.
Use prune-state, and not prune-block, if you want to query old blocks, tx receipts, etc.
@github1youlc is this still an issue for you?