
feat(perf): batch `eth_call` requests in tests/scripts

mds1 opened this issue 2 years ago • 4 comments

Component

Forge

Describe the feature you would like

See https://github.com/gakonst/ethers-rs/issues/2508 for details and rationale behind this request.

forge batching is not necessarily dependent on that ethers-rs feature, as even without it forge could presumably convert batched calls into multicalls. This could result in significant performance improvements for RPC-heavy scripts and fork tests.

Currently, instead of eth_call, we simulate the call locally and fetch state with eth_getStorageAt as needed. This is worse than just using eth_call directly because:

  • Alchemy prices eth_getStorageAt at 17 CUs but eth_call at 26, so any tx reading 2+ slots currently pays more (and runs slower due to multiple requests) than necessary, especially considering that we can batch eth_calls but can't batch eth_getStorageAt
  • Simulating might not give the right result for chains where some opcodes behave differently than on mainnet (e.g. NUMBER returns L2 block number on optimism but L1 block number on arbitrum)
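The cost argument above is simple arithmetic, sketched below with illustrative helper names (the constants are the Alchemy compute-unit prices cited in this issue; they are not pulled from any API):

```rust
// Illustrative cost model based on the Alchemy pricing cited above:
// eth_getStorageAt costs 17 CUs per slot, a single eth_call costs 26 CUs.
const CU_GET_STORAGE_AT: u64 = 17;
const CU_ETH_CALL: u64 = 26;

/// Cost of reading `slots` storage slots via individual eth_getStorageAt requests.
fn cost_storage_at(slots: u64) -> u64 {
    slots * CU_GET_STORAGE_AT
}

/// Cost of performing the same read with one eth_call.
fn cost_eth_call() -> u64 {
    CU_ETH_CALL
}

fn main() {
    // One slot: eth_getStorageAt is cheaper (17 vs 26 CUs).
    assert!(cost_storage_at(1) < cost_eth_call());
    // Two or more slots: eth_call is cheaper (34 vs 26 CUs), before even
    // counting the extra round-trips of separate requests.
    assert!(cost_storage_at(2) > cost_eth_call());
}
```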

So I think the best path forward is:

  1. Replace the current behavior with direct eth_call requests
  2. Then, batch eth_calls. The approach here would be:
  • Collect all consecutive staticcalls, and stop collecting when there's a state-changing operation
  • If Multicall3 is available on the chain, batch calls with it. If it's not, either use eth_call state overrides to place it there as part of the call, or fall back to normal unbatched requests
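The batching step above can be sketched as a simple grouping pass. This is a minimal illustration, not Foundry's actual types: `Op` and the string targets are hypothetical stand-ins for intercepted EVM operations.

```rust
/// Hypothetical classification of intercepted EVM operations; the names
/// are illustrative, not Foundry's real internal types.
#[derive(Debug, PartialEq)]
enum Op {
    /// Read-only call: safe to accumulate into a Multicall3 batch.
    StaticCall(&'static str),
    /// State-changing operation (e.g. a value-transferring call): flushes the batch.
    StateChange,
}

/// Collect consecutive static calls into batches, starting a new batch
/// whenever a state-changing operation is encountered.
fn batch_static_calls(ops: &[Op]) -> Vec<Vec<&'static str>> {
    let mut batches = Vec::new();
    let mut current = Vec::new();
    for op in ops {
        match op {
            Op::StaticCall(target) => current.push(*target),
            Op::StateChange => {
                if !current.is_empty() {
                    batches.push(std::mem::take(&mut current));
                }
            }
        }
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}

fn main() {
    let ops = [
        Op::StaticCall("balanceOf"),
        Op::StaticCall("totalSupply"),
        Op::StateChange,
        Op::StaticCall("symbol"),
    ];
    // Two batches: the reads before the state change, and the read after it.
    let batches = batch_static_calls(&ops);
    assert_eq!(batches, vec![vec!["balanceOf", "totalSupply"], vec!["symbol"]]);
}
```

Each inner batch would then be encoded as one Multicall3 `aggregate3` call (or left as individual requests when Multicall3 is unavailable and overrides aren't supported).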

Additional context

No response

mds1 avatar Jul 12 '23 00:07 mds1

Another approach to this could be to use the standard JSON RPC batching—not all providers support this, and it also doesn’t result in reduced RPC usage, but it can still be useful for users using their own node or a provider that supports it. This would probably have to be opt-in as a result, whereas the approach described above can be abstracted from the user as the default
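For reference, a JSON-RPC batch is just an array of ordinary request objects sent in one HTTP round-trip. A minimal std-only sketch (a real client would use serde_json or its RPC library; the params strings below are caller-supplied JSON):

```rust
/// Build a JSON-RPC 2.0 batch request body by hand (std only, for
/// illustration). Each entry is (method, params-as-JSON-text).
fn json_rpc_batch(calls: &[(&str, &str)]) -> String {
    let entries: Vec<String> = calls
        .iter()
        .enumerate()
        .map(|(id, (method, params))| {
            format!(
                r#"{{"jsonrpc":"2.0","id":{},"method":"{}","params":{}}}"#,
                id, method, params
            )
        })
        .collect();
    // A batch is simply a JSON array of request objects.
    format!("[{}]", entries.join(","))
}

fn main() {
    let body = json_rpc_batch(&[
        ("eth_call", r#"[{"to":"0x0000000000000000000000000000000000000001"},"latest"]"#),
        ("eth_getStorageAt", r#"["0x0000000000000000000000000000000000000001","0x0","latest"]"#),
    ]);
    assert!(body.starts_with('['));
    assert!(body.contains(r#""method":"eth_call""#));
    assert!(body.contains(r#""id":1"#));
}
```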

mds1 avatar Aug 31 '23 21:08 mds1

Perhaps something along these lines could be implemented:

https://github.com/foundry-rs/foundry/blob/529559c01fabad0e6316d605fd2c4326b8ad6567/crates/evm/core/src/fork/backend.rs#L326C61-L326C61

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

use futures::FutureExt;

impl<M> Future for BackendHandler<M>
where
    M: Middleware + Clone + Unpin + 'static,
{
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let pin = self.get_mut();

        // Re-borrow each pending request as a pinned future so they can be
        // polled together rather than one at a time.
        let futures = pin
            .pending_requests
            .iter_mut()
            .map(|request| match request {
                ProviderRequest::Storage(fut) => fut.as_mut(),
            })
            .collect::<Vec<_>>();

        // Drive all pending requests as one combined future. (Constructing a
        // fresh `join_all` on every poll is only a sketch; a real
        // implementation would keep it across polls.)
        let mut all_futures = futures::future::join_all(futures);

        match all_futures.poll_unpin(cx) {
            Poll::Ready(results) => {
                // ...

                if pin.handlers.is_empty() && pin.incoming.is_done() {
                    Poll::Ready(())
                } else {
                    Poll::Pending
                }
            }
            Poll::Pending => Poll::Pending,
        }
    }
}


JONEmoad avatar Nov 13 '23 09:11 JONEmoad

Another approach: assuming the RPC URL supports state overrides (like geth does), use Dedaub's storage extractor code to batch eth_getStorageAt: https://github.com/Dedaub/storage-extractor
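The state-override idea amounts to attaching a third parameter to eth_call that places reader bytecode at an address of our choosing. A std-only sketch of building such a request body (the address and bytecode below are placeholders, and a real client would use serde_json rather than string formatting):

```rust
/// Sketch of an eth_call request whose third parameter is a geth-style
/// state override, placing arbitrary `code` at the `to` address so a
/// single call can read many storage slots. All values are placeholders.
fn eth_call_with_override(to: &str, data: &str, code: &str) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":1,"method":"eth_call","params":[{{"to":"{to}","data":"{data}"}},"latest",{{"{to}":{{"code":"{code}"}}}}]}}"#
    )
}

fn main() {
    let req = eth_call_with_override(
        "0x0000000000000000000000000000000000000001", // placeholder address
        "0x",                                         // placeholder calldata
        "0x60ff",                                     // placeholder reader bytecode
    );
    // The override map keys the injected code by the target address.
    assert!(req.contains(r#""code":"0x60ff""#));
    assert!(req.contains(r#""method":"eth_call""#));
}
```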

mds1 avatar Feb 07 '24 21:02 mds1

In all this time, the issue has not been taken seriously. This is disappointing @mattsse @Evalir @gakonst @DaniPopes @onbjerg @klkvr

BABA3344 avatar May 24 '24 02:05 BABA3344