fix: ETH RPC API: ETH Call should use the parent state root of the subsequent tipset
Related Issues
Closes #11205
Proposed Changes
ETH Call should use the parent state root of the subsequent tipset. This can be obtained from `TipSetState` on the `StateManager`.
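For illustration, here is a minimal Go sketch of how a handler could resolve that state. It assumes `StateManager.TipSetState(ctx, ts)` returns the post-execution state root of `ts` (i.e. the parent state root of the next tipset) along with the receipts root; the function name and error handling are illustrative, not the exact diff in this PR.

```go
// Illustrative sketch only; not the exact change in this PR.
package ethcall

import (
	"context"

	"github.com/ipfs/go-cid"
	"golang.org/x/xerrors"

	"github.com/filecoin-project/lotus/chain/stmgr"
	"github.com/filecoin-project/lotus/chain/types"
)

// callStateRoot returns the state root an eth_call should execute against:
// the state produced by executing ts, i.e. the parent state root of the
// subsequent tipset.
func callStateRoot(ctx context.Context, sm *stmgr.StateManager, ts *types.TipSet) (cid.Cid, error) {
	// TipSetState computes (or looks up) the post-execution state of ts.
	stRoot, _, err := sm.TipSetState(ctx, ts)
	if err != nil {
		return cid.Undef, xerrors.Errorf("computing tipset state: %w", err)
	}
	return stRoot, nil
}
```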
Additional Info
Checklist
Before you mark the PR ready for review, please make sure that:
- [ ] Commits have a clear commit message.
- [ ] PR title is in the form of `<PR type>: <area>: <change being made>`
  - example: `fix: mempool: Introduce a cache for valid signatures`
  - `PR type`: fix, feat, build, chore, ci, docs, perf, refactor, revert, style, test
  - `area`, e.g. api, chain, state, market, mempool, multisig, networking, paych, proving, sealing, wallet, deps
- [ ] If the PR affects users (e.g., new feature, bug fix, system requirements change), update the CHANGELOG.md and add details to the UNRELEASED section.
- [ ] New features have usage guidelines and / or documentation updates in
- [ ] Tests exist for new functionality or change in behavior
- [ ] CI is green
@aarshkshah1992 - can you please assign a reviewer?
Well, this is bringing back past nightmares with respect to https://github.com/filecoin-project/lotus/pull/10216.
@Stebalien Are you saying that this won't work?
Hey @raulk
This PR needs a review from someone who understands the ETH RPC stack really well, but @Stebalien is fully focused on F3 currently. Would you or Mikers or Fridrick be able to help review this by any chance?
@rvagg Have addressed both your review comments. Can you please take one final look at the PR?
@rvagg @aarshkshah1992 can we get a status update here? What's needed to get this fully reviewed?
@momack2 just waiting on expert eyes to review so the blind don't lead the blind down a dodgy alleyway
@raulk I agree. We have removed the looping, so if the user specifies a tipset that would trigger an expensive fork, we just error out the request (there should be few of those, as Rod mentions).
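For context, a minimal sketch of that behaviour, using hypothetical helper names rather than the actual Lotus identifiers: instead of walking back through parent tipsets to avoid an expensive fork, the request is rejected with an error.

```go
// Illustrative sketch, not the actual Lotus code.
package ethcall

import (
	"context"
	"errors"

	"github.com/filecoin-project/lotus/chain/types"
)

// errExpensiveFork stands in for the error returned when an explicit call
// would require replaying an expensive state migration.
var errExpensiveFork = errors.New("refusing explicit call due to state fork at epoch")

// requiresExpensiveFork is a hypothetical predicate: executing ts would
// trigger an expensive migration.
func requiresExpensiveFork(ts *types.TipSet) bool {
	return false // placeholder
}

// resolveCallTipSet shows the new behaviour: rather than looping back through
// parent tipsets to find one without an expensive fork, the request errors out.
func resolveCallTipSet(ctx context.Context, ts *types.TipSet) (*types.TipSet, error) {
	if requiresExpensiveFork(ts) {
		return nil, errExpensiveFork
	}
	return ts, nil
}
```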
@momack2 We were waiting on a review from someone who understands the ETH<>Filecoin stack well. Now that Raul has reviewed it, we can land this. Have addressed the latest round of review. Will merge on a green tick.
Will address @raulk's response in a future PR when he takes a look at this, as we're not changing any pre-existing behaviour here wrt handling migration epochs.
@raulk:
- But this is the wrong way to go about it. Events should always be served from the event index, and the event index should track its own data availability and reject queries for epochs it has no data for.
- In fact, at IPC we've had two or three episodes of event data inconsistency from Lotus, and I suspect it has to do with Lotus happily returning empty answers for epoch ranges it has not indexed (instead of erroring with "epochs not indexed").
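A minimal sketch of the rejection behaviour described in the second point above, using hypothetical names rather than the real Lotus event-index API:

```go
// Illustrative sketch; the real Lotus event index API differs.
package events

import (
	"fmt"

	"github.com/filecoin-project/go-state-types/abi"
)

// indexedRange tracks which epochs the (hypothetical) event index has fully
// processed.
type indexedRange struct {
	min, max abi.ChainEpoch
}

// checkRange rejects queries for epochs the index has not processed, instead
// of silently returning an empty result set.
func (r indexedRange) checkRange(from, to abi.ChainEpoch) error {
	if from < r.min || to > r.max {
		return fmt.Errorf("epochs [%d, %d] not indexed yet (indexed range is [%d, %d])",
			from, to, r.min, r.max)
	}
	return nil
}
```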
Some relevant issues that you might want to weigh in on, because there are known problems all over this area that are difficult to prioritise without hearing feedback from users. If these are problems for you, let us know and they can be bumped up in priority:
- Events source of truth: db or receipts #11830
- Record processed epochs / tipsets in events index #11640
- EventFilterManager needs to deal in epochs #11680
And there's more in the list, if you want to have a look at the things we're currently aware of: https://github.com/filecoin-project/lotus/issues?q=is%3Aopen+is%3Aissue+label%3Aarea%2Fevents
@raulk Can we go ahead and merge this one, given that we're now co-ordinating on a separate workstream for events?
@aarshkshah1992 I am good with merging this as-is and addressing the remaining design issues next, as long as someone focuses on them straight after. We amassed good momentum in discussions, we've all "paged back into" the subject, and it would be a net loss of collective productivity if we merge this PR and mentally move on.
@raulk Vulcanise is also running into some of the rough edges related to events. So we are starting the work by fixing one of the issues they're running into, i.e. async indexing of events leading to races where a client discovers a block but does not get any events for it because they're yet to be indexed.
However, please note that we don't have a dedicated epic to fix all of the issues you and @rvagg have listed above. If doing all of those is important, it would be great to run a prioritisation epic for this workstream with @jennijuju and @rvagg.
@aarshkshah1992 this PR sped the eth_call RPC call up considerably!
Before:
ab -p postdata.json -T application/json -c 10 -n 1000 http://127.0.0.1:1234/rpc/v1
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Benchmarking 127.0.0.1 (be patient)
Server Software:
Server Hostname: 127.0.0.1
Server Port: 1234
Document Path: /rpc/v1
Document Length: 103 bytes
Concurrency Level: 10
Time taken for tests: 127.168 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 221000 bytes
Total body sent: 394000
HTML transferred: 103000 bytes
Requests per second: 7.86 [#/sec] (mean)
Time per request: 1271.684 [ms] (mean)
Time per request: 127.168 [ms] (mean, across all concurrent requests)
Transfer rate: 1.70 [Kbytes/sec] received
3.03 kb/s sent
4.72 kb/s total
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 302 1264 589.7 1112 3408
Waiting: 302 1264 589.7 1112 3408
Total: 302 1264 589.8 1112 3408
Percentage of the requests served within a certain time (ms)
50% 1112
66% 1415
75% 1647
80% 1785
90% 2143
95% 2402
98% 2741
99% 2865
100% 3408 (longest request)
After:
$ ab -p postdata.json -T application/json -c 10 -n 1000 http://127.0.0.1:1234/rpc/v1
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Server Software:
Server Hostname: 127.0.0.1
Server Port: 1234
Document Path: /rpc/v1
Document Length: 103 bytes
Concurrency Level: 10
Time taken for tests: 1.524 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 221000 bytes
Total body sent: 394000
HTML transferred: 103000 bytes
Requests per second: 656.21 [#/sec] (mean)
Time per request: 15.239 [ms] (mean)
Time per request: 1.524 [ms] (mean, across all concurrent requests)
Transfer rate: 141.62 [Kbytes/sec] received
252.49 kb/s sent
394.11 kb/s total
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 4 15 23.0 8 221
Waiting: 4 15 22.9 8 219
Total: 4 15 23.0 8 221
Percentage of the requests served within a certain time (ms)
50% 8
66% 10
75% 13
80% 16
90% 29
95% 49
98% 81
99% 131
100% 221 (longest request)