
RPC: `trace_filter` results in `internal error`

Open SinErgy84 opened this issue 3 years ago • 6 comments

Describe the bug While tracing different transactions on mainnet for my use case, I've found a bug in trace_filter: it fails if the block delta between fromBlock and toBlock gets too large. The response from the client looks like this:

"message": "Error during rpc-method >>trace_filter<<: Internal error",                                                                                                                                                                                                     
        "data": "Nethermind.Trie.TrieException: Failed to load key d16a7c9c2fe5097520b1c6510a739bdfcd7ccad01f5e9b1d318829b6d36d7209 from root hash 0xb095023d1261c916200802ffbdd6c5869cf12f5014bedd86b54e3980a2260659.
 ---> Nethermind.Trie.TrieException: Node 0xb095023d1261c916200802ffbdd6c5869cf12f5014bedd86b54e3980a2260659 is missing from the DB
   at Nethermind.Trie.Pruning.TrieStore.LoadRlp(Keccak keccak, IKeyValueStore keyValueStore)
   at Nethermind.Trie.TrieNode.ResolveNode(ITrieNodeResolver tree)
   at Nethermind.Trie.PatriciaTree.Run(Span`1 updatePath, Int32 nibblesCount, Byte[] updateValue, Boolean isUpdate, Boolean ignoreMissingDelete, Keccak startRootHash)
   at Nethermind.Trie.PatriciaTree.Get(Span`1 rawKey, Keccak rootHash)
   --- End of inner exception stack trace ---
   at Nethermind.Trie.PatriciaTree.Get(Span`1 rawKey, Keccak rootHash)
   at Nethermind.State.StateProvider.GetState(Address address)
   at Nethermind.State.StateProvider.GetAndAddToCache(Address address)
   at Nethermind.State.StateProvider.GetNonce(Address address)
   at Nethermind.Consensus.Processing.BlockProcessor.BlockValidationTransactionsExecutor.ProcessTransactions(Block block, ProcessingOptions processingOptions, BlockReceiptsTracer receiptsTracer, IReleaseSpec spec)
   at Nethermind.Consensus.Processing.BlockProcessor.ProcessBlock(Block block, IBlockTracer blockTracer, ProcessingOptions options)
   at Nethermind.Consensus.Processing.BlockProcessor.ProcessOne(Block suggestedBlock, ProcessingOptions options, IBlockTracer blockTracer)
   at Nethermind.Consensus.Processing.BlockProcessor.Process(Keccak newBranchStateRoot, List`1 suggestedBlocks, ProcessingOptions options, IBlockTracer blockTracer)
   at Nethermind.Consensus.Processing.BlockchainProcessor.ProcessBranch(ProcessingBranch processingBranch, Block suggestedBlock, ProcessingOptions options, IBlockTracer tracer)
   at Nethermind.Consensus.Processing.BlockchainProcessor.Process(Block suggestedBlock, ProcessingOptions options, IBlockTracer tracer)
   at Nethermind.Consensus.Processing.OneTimeChainProcessor.Process(Block block, ProcessingOptions options, IBlockTracer tracer)
   at Nethermind.Consensus.Tracing.Tracer.Trace(Block block, IBlockTracer blockTracer)
   at Nethermind.JsonRpc.Modules.Trace.TraceRpcModule.TraceBlock(Block block, ParityTraceTypes traceTypes, TxTraceFilter txTraceFilter)
   at Nethermind.JsonRpc.Modules.Trace.TraceRpcModule.trace_filter(TraceFilterForRpc traceFilterForRpc)",     
        "code": -32603,

To Reproduce Call curl --data '{"method":"trace_filter","params":[{"fromBlock":"0xe3ed9c","toBlock":"0xe3eeab"}],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST 172.17.0.1:18546, for example with a block range from 14937500 (0xe3ed9c) to 14937771 (0xe3eeab).
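Since the error only appears once the block delta gets too large, a possible client-side workaround until this is fixed is to split the range into smaller chunks and concatenate the results. A minimal sketch in Python, assuming the endpoint from the curl call above and an arbitrary chunk size of 50 blocks (both are assumptions to tune per node):

```python
# Client-side workaround sketch: split the trace_filter range into smaller
# chunks so each request stays below the size that triggers the error.
# RPC_URL and CHUNK are assumptions based on the report above.
import json
import urllib.request

RPC_URL = "http://172.17.0.1:18546"  # endpoint used in the curl example
CHUNK = 50                           # assumed chunk size; tune for your node


def trace_filter(from_block: int, to_block: int) -> dict:
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "trace_filter",
        "params": [{"fromBlock": hex(from_block), "toBlock": hex(to_block)}],
    }
    req = urllib.request.Request(
        RPC_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


traces = []
for start in range(14937500, 14937772, CHUNK):
    end = min(start + CHUNK - 1, 14937771)
    reply = trace_filter(start, end)
    if "error" in reply:
        raise RuntimeError(f"blocks {start}-{end}: {reply['error']}")
    traces.extend(reply["result"])
```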

Expected behavior

  • either a valid output of trace_filter
  • or an adequate error message (e.g. a memory limit was hit) instead of an internal error
  • or a suggestion for adjusting the config parameters to avoid the problem

Desktop (please complete the following information):

  • OS: Ubuntu 22.04 LTS
  • Nethermind 1.13.2

Additional context The client is fully synced with mainnet. Config:

NETHERMIND_CONFIG="mainnet"
NETHERMIND_JSONRPCCONFIG_ENABLED="true"
NETHERMIND_JSONRPCCONFIG_HOST="0.0.0.0"
NETHERMIND_NETWORKCONFIG_MAXACTIVEPEERS="15"
NETHERMIND_INITCONFIG_LOGFILENAME="log.txt"
#
# Special
#
#NETHERMIND_INITCONFIG_STATICNODESPATH="static-peers.json"
#NETHERMIND_NETWORKCONFIG_ONLYSTATICPEERS="true"
NETHERMIND_NETWORKCONFIG_P2PPORT="30304"
NETHERMIND_NETWORKCONFIG_DISCOVERYPORT="30304"
NETHERMIND_JSONRPCCONFIG_PORT=8546

SinErgy84 avatar Jun 10 '22 10:06 SinErgy84

This is due to pruning or fast syncing from block X; big/full ranges will only work in archive mode. I think that in all call/trace methods we need to catch those exceptions and return information about the state not being available.
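As a rough client-side check of that condition, one can ask the node for any trivial piece of state at the block in question before tracing it; if the node cannot resolve it, trace_filter over that block is likely to fail the same way. A sketch, assuming the endpoint from the report and using eth_getBalance of the zero address purely as a probe:

```python
# Probe sketch: check whether the node can still resolve state for a block
# before tracing it. The endpoint and the choice of eth_getBalance as a cheap
# probe are assumptions, not part of the Nethermind API discussion above.
import json
import urllib.request

RPC_URL = "http://172.17.0.1:18546"


def rpc_call(method: str, params: list) -> dict:
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    req = urllib.request.Request(
        RPC_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def state_available(block_number: int) -> bool:
    # If the node answers, the state root for that block is still resolvable;
    # if it returns an error, tracing that block will likely hit the same
    # "missing from the DB" condition described in the stack trace above.
    zero_address = "0x" + "00" * 20
    return "error" not in rpc_call("eth_getBalance", [zero_address, hex(block_number)])
```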

LukaszRozmej avatar Jun 24 '22 09:06 LukaszRozmej

@LukaszRozmej why did it work fine on Parity (OpenEthereum), even on a fast node with pruning activated?

gituser avatar Jun 24 '22 10:06 gituser

@LukaszRozmej why did it work fine on Parity (OpenEthereum), even on a fast node with pruning activated?

Parity stored traces in a separate database. We do not. We could potentially change that, but only for fully synced nodes (with pruning). The tradeoff would be an increased disk footprint.
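Purely as an illustration of that idea (not Nethermind's actual implementation): traces would be written once, while the block is processed and its state is still at hand, and trace_filter would become a lookup that never touches historical state. A hypothetical sketch:

```python
# Hypothetical sketch of a separate traces store (not Nethermind's design):
# traces are written once at block-processing time and trace_filter becomes
# a plain read over the stored data.
from typing import Dict, List


class TraceStore:
    def __init__(self) -> None:
        self._by_block: Dict[int, List[dict]] = {}  # block number -> traces

    def put(self, block_number: int, traces: List[dict]) -> None:
        # Called while the block is processed, when its state is still at hand.
        self._by_block[block_number] = traces

    def filter(self, from_block: int, to_block: int) -> List[dict]:
        # Serving a range needs no state access; the cost is the extra disk
        # footprint mentioned above.
        out: List[dict] = []
        for number in range(from_block, to_block + 1):
            out.extend(self._by_block.get(number, []))
        return out
```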

LukaszRozmej avatar Jun 24 '22 13:06 LukaszRozmej

@LukaszRozmej I think many here are migrating from OpenEthereum (Parity), so for them that solution would be viable. Please implement it if it's easy and possible.

An increased disk footprint like +50-100 GB? That's a much better tradeoff compared to running an archive node, which requires 6 TB of space. Thank you.

gituser avatar Jun 24 '22 13:06 gituser

@gituser It is possible, but far from easy. Does OE/Parity handle the sync of this DB in its warp sync?

LukaszRozmej avatar Jun 27 '22 09:06 LukaszRozmej

@gituser It is possible, but far from easy. Does OE/Parity handle the sync of this DB in its warp sync?

Not sure, but I think OE/Parity only allowed tracing if you synced from scratch using fast sync. Also, you had to specify tracing: on in the database options before starting to sync in order to generate the traces DB.

gituser avatar Jun 27 '22 11:06 gituser

@gituser @SinErgy84 I created a plugin that stores the traces and retrieves them through JSON-RPC; the only thing is that it needs FullSync (but not archive state).

LukaszRozmej avatar Sep 07 '22 08:09 LukaszRozmej