rust-web3
Async web3::contract::Contract::query for Batch Transports
Hi, before v0.14.0 one could call something like let _ = contract.query(...); on a contract that used a batch web3 transport to queue the request, which was later sent with submit_batch(). But, if I understand correctly, the new async version of web3::contract::Contract::query doesn't do anything unless it is driven, e.g. with web3::block_on. And even then, the return value of the query function is Result<R: Detokenize>, and it isn't clear what that should be in such a case, since the actual response is only received later, when the batch is submitted.
I did some testing and indeed, using a batch transport with contracts seems to be broken in v0.14.0. The issue is that all contract functions are now fully async, so nothing gets executed to push the request onto the batch list. If you block on them, you have another problem: you can't await them later to collect the results after submitting the batch. The workaround is to rewrite the functions to be only partially async, so the first part executes immediately (eth.call() pushes the request onto the batch list) and only the remainder is returned as a future. For web3::contract::Contract::query this workaround works for me:
/// Call constant function
pub fn query<R, A, B, P>(&self, func: &str, params: P, from: A, options: Options, block: B) -> impl Future<Output = Result<R>> + '_
where
    R: Detokenize,
    A: Into<Option<Address>>,
    B: Into<Option<BlockId>>,
    P: Tokenize,
{
    // Synchronous part: encode the call and hand it to eth.call() right away,
    // so a batch transport can queue the request before the future is awaited.
    let result = self
        .abi
        .function(func)
        .and_then(|function| {
            function
                .encode_input(&params.into_tokens())
                .map(|call| (call, function))
        })
        .map(|(call, function)| {
            let call_future = self.eth.call(
                CallRequest {
                    from: from.into(),
                    to: Some(self.address),
                    gas: options.gas,
                    gas_price: options.gas_price,
                    value: options.value,
                    data: Some(Bytes(call)),
                },
                block.into(),
            );
            (call_future, function)
        });
    // Asynchronous part: await the response and decode the output.
    async {
        let (call_future, function) = result?;
        let bytes = call_future.await?;
        let output = function.decode_output(&bytes.0)?;
        R::from_tokens(output)
    }
}
This fix keeps the function signature unchanged, but ideally the returned future would be wrapped in a Result, so any ethabi errors regarding the function name or input encoding would be returned right away rather than as part of the future.
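For illustration, with this partially async query the batch pattern from examples/transport_batch.rs should work with contracts roughly as sketched below (untested; the endpoint URL, contract address, ABI file and totalSupply function are placeholders, and it assumes a tokio runtime and the usual Http + Batch transports):

use web3::contract::{Contract, Options};
use web3::transports::{Batch, Http};
use web3::types::{Address, U256};

#[tokio::main]
async fn main() {
    // Placeholder endpoint; any JSON-RPC node works the same way.
    let http = Http::new("http://localhost:8545").expect("transport failed");
    let web3 = web3::Web3::new(Batch::new(http));

    // Placeholder address and ABI; substitute a real deployed contract.
    let contract = Contract::from_json(web3.eth(), Address::zero(), include_bytes!("token.json"))
        .expect("invalid ABI");

    // With the partially async query, this call already pushes the request onto
    // the batch and returns a future that completes once the batch has been sent.
    let total_supply = contract.query("totalSupply", (), None, Options::default(), None);

    // Send all queued requests in a single round trip.
    web3.transport().submit_batch().await.expect("batch failed");

    // Only now is the future awaited, decoding the response received above.
    let total_supply: U256 = total_supply.await.expect("query failed");
    println!("total supply: {}", total_supply);
}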
Why not just spawn & forget the future after you receive it? With async-await support I'm a bit hesitant to return complicated things like Result<impl Future<Output = Result<_, _>>, _> because it feels very error prone.
You cannot spawn and forget. If we take a look at examples/transport_batch.rs, it uses the following batch pattern:
let accounts = web3.eth().accounts();
let result = web3.transport().submit_batch().await?;
let accounts = accounts.await?;
This example works well; however, it no longer works if you replace eth.accounts() with contract.query() or any other web3 contract call. The difference is that contract.query() is fully async, so no part of its code executes until it is awaited. But the first part has to execute before submit_batch() for the batch pattern above to work.
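To illustrate the difference outside of web3: an async fn does nothing at all until it is polled, whereas a partially async function runs its first part during the call and only defers the rest. A minimal self-contained sketch (the Queue type and both functions are made up for illustration, and it uses the futures crate for block_on):

use std::cell::RefCell;
use std::future::Future;

// Stand-in for the batch transport's request queue.
type Queue = RefCell<Vec<&'static str>>;

// Stand-in for "decode the response that arrived with the batch".
async fn fetch_result(call: &'static str) -> String {
    format!("result of {}", call)
}

// Fully async: nothing runs, not even the queue push, until the returned
// future is polled, which is too late for the submit_batch() pattern.
async fn fully_async(queue: &Queue, call: &'static str) -> String {
    queue.borrow_mut().push(call);
    fetch_result(call).await
}

// Partially async: the queue push happens during the call itself (like
// eth.call() pushing onto the batch); only the part that needs the
// response is deferred into the returned future.
fn partially_async(queue: &Queue, call: &'static str) -> impl Future<Output = String> {
    queue.borrow_mut().push(call);
    async move { fetch_result(call).await }
}

fn main() {
    let queue = Queue::default();

    // The fully async version queues nothing if the future is never polled.
    drop(fully_async(&queue, "balanceOf"));
    assert!(queue.borrow().is_empty());

    // The partially async version queues the request immediately...
    let pending = partially_async(&queue, "balanceOf");
    assert_eq!(*queue.borrow(), ["balanceOf"]);

    // ...so submit_batch() could run here, and the future is awaited afterwards.
    let result = futures::executor::block_on(pending);
    assert_eq!(result, "result of balanceOf");
}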
Regarding complicated things like Result<impl Future<Output = Result<_, _>>, _>: we can settle on impl Future<Output = Result<R>> + '_, or go back to the QueryResult type that was used in previous versions. Either way, it cannot remain as it is, because right now all batch calls with contracts are broken.
@mdben1247 I see, indeed that's a serious problem I hadn't thought of, and we don't really have a detailed test case for this. I'll try to fix it over the weekend.
@tomusdrw Thanks. As proposed in my query() workaround above, all async contract functions could be made partially async, and then the batch pattern works. However, I don't consider myself enough of an expert here, so I am not sure whether this is really the best solution.
This should be relabeled as a bug: async contract functions do not work properly when using a batched web3 transport. If the proposed fix above seems OK, I can prepare a PR, but I am not completely confident it is the proper way to do it.
@mdben1247 Totally agree, apologies for the delay. Happy to review the PR; I haven't wrapped my head around it just yet (lack of time).