
TypeError: cannot read property 'calls' of undefined while tracing using debug_traceBlockByNumber

Open cp0k opened this issue 2 years ago • 9 comments

Describe the bug
I am consistently seeing the following error on some specific blocks while fetching certain Fantom traces using the method debug_traceBlockByNumber.

ie: "error": "TypeError: cannot read property 'calls' of undefined in server-side tracer function 'fault'"

Upgrading from Go-Opera V1.1.1-txtracing-rc.1 to the latest Go-Opera V1.1.1-txtracing-rc.2 did not resolve the issue.

To Reproduce

The TypeError appears when calling the method on any of these blocks:

  • 40092522
  • 40092519
  • 40092534
  • 40092535
  • 40092546
  • 40092543
  • 40092549
  • 40092556

Steps to reproduce the behavior:

curl -X POST 'https://RPC_PROVIDER/' \
  --header 'Content-Type: application/json' \
  --data-raw '{"jsonrpc":"2.0","method":"debug_traceBlockByNumber","params":["0x263c38c", {"tracer": "callTracer", "timeout": "119s"}],"id":0}'

When using the same command with a different block (0x263c48d), the TypeError does not appear:

curl -X POST 'https://RPC_PROVIDER/' \
  --header 'Content-Type: application/json' \
  --data-raw '{"jsonrpc":"2.0","method":"debug_traceBlockByNumber","params":["0x263c48d", {"tracer": "callTracer", "timeout": "119s"}],"id":0}'
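
To quickly check whether a given block trips the error, the per-transaction error fields can be pulled out of the response (a sketch, assuming jq is installed and the response has the shape shown above, i.e. a result array whose entries carry an "error" field when the tracer fails):

curl -s -X POST 'https://RPC_PROVIDER/' \
  --header 'Content-Type: application/json' \
  --data-raw '{"jsonrpc":"2.0","method":"debug_traceBlockByNumber","params":["0x263c38c", {"tracer": "callTracer", "timeout": "119s"}],"id":0}' \
  | jq '.result[] | select(.error) | .error'

With the block numbers listed above this should print the TypeError; with 0x263c48d it should produce no output.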

Expected behavior
For the results to be consistent between blocks.

Screenshots
TypeError:

Screen Shot 2022-08-23 at 4 21 11 PM

Expected result (no error):

Screen Shot 2022-08-23 at 4 51 24 PM

I would not expect the TypeError to appear before the rest of the query result.

Versions:

  • Go-Opera Version: 1.1.1-txtracing-rc.1
  • OS: Linux Ubuntu 20.04.4 LTS

cp0k • Aug 23 '22 21:08

+1 Also experiencing this behaviour

ricardo-gregorio • Aug 29 '22 15:08

This error is caused by a bug in call_tracer.js from go-ethereum 1.10.8, which is used in the latest Opera release.

As a workaround you can use the legacy call tracer ("tracer": "callTracerLegacy").

For your example:

{"jsonrpc":"2.0","method":"debug_traceBlockByNumber","params":["0x263c38c", {"tracer": "callTracerLegacy", "timeout": "119s"}],"id":0}

You can also pass your own JavaScript code directly into the "tracer" parameter.
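
For example, a minimal inline tracer that only reports the gas used per transaction could look like this (a sketch, assuming the node accepts the usual go-ethereum JS tracer shape with step/fault/result callbacks; RPC_PROVIDER is a placeholder as above):

curl -X POST 'https://RPC_PROVIDER/' \
  --header 'Content-Type: application/json' \
  --data-raw '{"jsonrpc":"2.0","method":"debug_traceBlockByNumber","params":["0x263c38c", {"tracer": "{step: function(log, db) {}, fault: function(log, db) {}, result: function(ctx, db) { return { gasUsed: ctx.gasUsed }; }}", "timeout": "119s"}],"id":0}'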

thaarok • Sep 01 '22 07:09

The following change needs to be backported to Fantom's fork of go-ethereum used by the txtracing branch to resolve this: https://github.com/ethereum/go-ethereum/pull/23667/files

thaarok • Sep 01 '22 08:09

The issue is fixed in release/txtracing/1.1.1-rc.2 branch.

thaarok • Sep 01 '22 17:09

Hi @hkalina !

We checked out 0b72c064bc49b209455a983b160df6e94e730362 and it was unusable for us. Sometimes it would not add a single new block after a restart, and other times it never caught up to the tip even after 30 minutes, continually falling further behind.

Reverting to the release version gets the node back to the tip.

Do you expect this to occur with the changes you made? Would we need to do any changes in our config/flags?

Thanks!

qk-santi • Sep 08 '22 18:09

I am running version 0b72c064bc49b209455a983b160df6e94e730362 on the existing database without problems - it keeps up with the head. Can you provide more info about your hardware configuration and your CPU/memory/IO usage? Is your node under RPC load, or does it fall behind even with no RPC traffic?

Thanks for your feedback

thaarok • Sep 12 '22 16:09

Hi @hkalina, I am working with @qk-santi on this. The nodes all have 12 cores and 32 GB of RAM. These nodes have been running fine, keeping up with the tip and serving traffic, so we are trying to upgrade working nodes. It appears that the upgraded nodes are not getting many peers: the pre-upgrade configuration all had over 200 peers, while after the upgrade they are struggling to go above 50, and some are staying in the low teens. This is our startup command:

opera --http --http.addr 0.0.0.0 --maxpeers 250 --http.api=ftm,sfc,eth,debug,web3,net,txpool,trace,dag --ws --ws.addr 0.0.0.0 --ws.api="ftm,sfc,eth,debug,web3,net,txpool,trace,dag" --ws.origins "*" --tracenode --rpc.gascap 600000000 --rpc.txfeecap 0

Any idea how we can help it to find more peers? Or, can I export the peers list from the working nodes to the newly upgraded ones somehow? Not quite sure why it's struggling to find peers.
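
For reference, the peer count can be polled over the node's own RPC while experimenting, since the net API is already enabled in the command above (a minimal sketch; NODE_HOST and HTTP_PORT are placeholders for the node's --http.addr/--http.port, and the returned value is a hex string):

curl -s -X POST 'http://NODE_HOST:HTTP_PORT/' \
  --header 'Content-Type: application/json' \
  --data-raw '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
  | jq -r '.result'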

davidkarim • Sep 17 '22 05:09

It looks like we get better results when we remove the node key prior to restarting the client service. After doing this, nodes generally got back to the tip within 10 minutes or so. On average they reached about 15 peers, and that was enough to get them caught up.

davidkarim • Sep 19 '22 17:09

This may be related to a discovery protocol issue - can you please retest with my patched go-ethereum library?

go mod edit -replace github.com/ethereum/go-ethereum=github.com/hkalina/go-ethereum@66dace009c936fdcd4ed57310629b568cc0c31fd
go mod download github.com/ethereum/go-ethereum
make

Please also remove the go-opera directory in the datadir to drop the old peers database.
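
To double-check that the replace directive took effect before building, the resolved module can be inspected (a sketch; when a replace is active, go list -m prints the replacement target after a => arrow):

go list -m github.com/ethereum/go-ethereum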

thaarok • Sep 19 '22 19:09

Hi @cp0k, could you try with the new release release/txtracing/1.1.2-rc.5 or release/txtracing/1.1.2-rc.6?

quan8 • Apr 29 '23 22:04