Create a "viewer" contract that can fetch the entire on-chain transcoder pool
Currently, all clients interacting with the BondingManager need to submit N RPC requests in order to fetch the current on-chain transcoder pool (example from go-livepeer), where N is the size of the on-chain transcoder pool. Furthermore, clients often need other on-chain data about a transcoder (e.g. active stake, total stake, service URI) in addition to the transcoder's address, and at the moment a client needs to send separate RPC requests to fetch this data. Reducing the # of RPC requests required in these situations would help clients that depend on rate-limited ETH RPC providers (e.g. Infura) and would also reduce the execution time required to fetch relevant on-chain data about the transcoder pool [1].
One way to reduce the # of RPC requests in these situations would be to deploy a "viewer" contract. This contract would read data from the BondingManager and could batch together function calls that would otherwise need to be executed on the BondingManager individually into a single function call. Clients would then interact with this viewer contract instead of directly interacting with the BondingManager, at least for the on-chain data that can be fetched via the viewer contract. To address the situations described above, the viewer contract could expose a function that loops through the transcoder pool and returns all relevant on-chain data for each of the pool addresses.
Here is an example of what the viewer contract might look like:
commit 05746368b85a22f3332cdc89d3bde9fb43fa7710
Author: Yondon Fu <[email protected]>
Date: Wed Jul 8 17:21:21 2020 -0400
Viewer WIP
diff --git a/contracts/Viewer.sol b/contracts/Viewer.sol
new file mode 100644
index 0000000..be7a680
--- /dev/null
+++ b/contracts/Viewer.sol
@@ -0,0 +1,86 @@
+pragma solidity ^0.5.11;
+pragma experimental ABIEncoderV2;
+
+contract Viewer {
+    struct Transcoder {
+        uint256 lastRewardRound;
+        uint256 rewardCut;
+        uint256 feeShare;
+        uint256 lastActiveStakeUpdateRound;
+        uint256 activationRound;
+        uint256 deactivationRound;
+        uint256 activeStake;
+        uint256 totalStake;
+        string serviceURI;
+    }
+
+    function getTranscoder(
+        IBondingManager _bondingManager,
+        IServiceRegistry _serviceRegistry,
+        IRoundsManager _roundsManager,
+        address _addr
+    )
+        public
+        view
+        returns (Transcoder memory)
+    {
+        (
+            uint256 lastRewardRound,
+            uint256 rewardCut,
+            uint256 feeShare,
+            uint256 lastActiveStakeUpdateRound,
+            uint256 activationRound,
+            uint256 deactivationRound
+        ) = _bondingManager.getTranscoder(_addr);
+
+        (
+            ,
+            ,
+            uint256 activeStake,
+            ,
+            ,
+            ,
+            ,
+            ,
+            ,
+        ) = _bondingManager.getTranscoderEarningsPoolForRound(_addr, _roundsManager.currentRound());
+
+        return Transcoder({
+            lastRewardRound: lastRewardRound,
+            rewardCut: rewardCut,
+            feeShare: feeShare,
+            lastActiveStakeUpdateRound: lastActiveStakeUpdateRound,
+            activationRound: activationRound,
+            deactivationRound: deactivationRound,
+            activeStake: activeStake,
+            totalStake: _bondingManager.transcoderTotalStake(_addr),
+            serviceURI: _serviceRegistry.getServiceURI(_addr)
+        });
+    }
+
+    function getTranscoderPool(
+        IBondingManager _bondingManager,
+        IServiceRegistry _serviceRegistry,
+        IRoundsManager _roundsManager
+    )
+        public
+        view
+        returns (Transcoder[] memory)
+    {
+        uint256 poolSize = _bondingManager.getTranscoderPoolSize();
+        Transcoder[] memory res = new Transcoder[](poolSize);
+
+        address addr = address(0);
+        for (uint256 i = 0; i < poolSize; i++) {
+            if (i == 0) {
+                addr = _bondingManager.getFirstTranscoderInPool();
+            } else {
+                addr = _bondingManager.getNextTranscoderInPool(addr);
+            }
+
+            res[i] = getTranscoder(_bondingManager, _serviceRegistry, _roundsManager, addr);
+        }
+
+        return res;
+    }
+}
\ No newline at end of file
An additional function that could be useful is one that accepts a list of addresses (instead of using the addresses in the transcoder pool) and returns all relevant on-chain data for each address. This function could be used by a client that needs to fetch the stake for multiple addresses at a regular interval (e.g. at the beginning of a round).
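Purely as an illustration of the client side, a single eth_call against such a batch function could replace N per-address requests. Below is a minimal go-ethereum sketch; the `getStakes(address[])` signature, the viewer contract address and the RPC URL are hypothetical placeholders rather than anything that exists today:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"math/big"
	"strings"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

// Hypothetical ABI fragment for a viewer function that returns the stake for
// each address in a list; the real name/signature would depend on whatever
// viewer contract actually gets deployed.
const viewerABI = `[{"name":"getStakes","type":"function","stateMutability":"view","inputs":[{"name":"_addrs","type":"address[]"}],"outputs":[{"name":"","type":"uint256[]"}]}]`

func main() {
	client, err := ethclient.Dial("https://mainnet.infura.io/v3/<project-id>")
	if err != nil {
		log.Fatal(err)
	}

	parsed, err := abi.JSON(strings.NewReader(viewerABI))
	if err != nil {
		log.Fatal(err)
	}

	// The addresses we want stake for; with a batch viewer function this is a
	// single eth_call instead of one RPC request per address.
	addrs := []common.Address{
		common.HexToAddress("0x0000000000000000000000000000000000000001"),
		common.HexToAddress("0x0000000000000000000000000000000000000002"),
	}

	data, err := parsed.Pack("getStakes", addrs)
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder address for a deployed viewer contract.
	viewerAddr := common.HexToAddress("0x0000000000000000000000000000000000000000")

	out, err := client.CallContract(context.Background(), ethereum.CallMsg{To: &viewerAddr, Data: data}, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Decode using the go-ethereum v1.9.x-style Unpack; newer releases expose
	// UnpackIntoInterface with the same behavior.
	var stakes []*big.Int
	if err := parsed.Unpack(&stakes, "getStakes", out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(stakes)
}
```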
[1] Batching operations into a single function call will reduce the # of RPC requests, but it will increase the number of steps executed in the EVM. My guess is that the overhead from EVM step execution will be less than the execution time saved by not submitting multiple RPC requests, but we should validate this. We should also benchmark the gas cost of the functions exposed by the viewer contract to make sure that it is below the gas cap for eth_call imposed by certain RPC providers such as Infura.
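One rough way to run that benchmark from a client would be to ask a node for a gas estimate of the batched call via eth_estimateGas, e.g. with go-ethereum. In the sketch below the viewer address and the registry addresses are placeholders, since no viewer contract is deployed yet:

```go
package main

import (
	"context"
	"log"
	"strings"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

// Minimal ABI fragment for the proposed getTranscoderPool function. Outputs are
// omitted because we only need the calldata for a gas estimate.
const viewerABI = `[{"name":"getTranscoderPool","type":"function","stateMutability":"view","inputs":[{"name":"_bondingManager","type":"address"},{"name":"_serviceRegistry","type":"address"},{"name":"_roundsManager","type":"address"}],"outputs":[]}]`

func main() {
	client, err := ethclient.Dial("https://mainnet.infura.io/v3/<project-id>")
	if err != nil {
		log.Fatal(err)
	}

	parsed, err := abi.JSON(strings.NewReader(viewerABI))
	if err != nil {
		log.Fatal(err)
	}

	// The real BondingManager, ServiceRegistry and RoundsManager addresses
	// would go here; zero addresses are placeholders.
	data, err := parsed.Pack(
		"getTranscoderPool",
		common.Address{}, // BondingManager
		common.Address{}, // ServiceRegistry
		common.Address{}, // RoundsManager
	)
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder address for a deployed viewer contract.
	viewerAddr := common.HexToAddress("0x0000000000000000000000000000000000000000")

	gas, err := client.EstimateGas(context.Background(), ethereum.CallMsg{To: &viewerAddr, Data: data})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("estimated gas for the getTranscoderPool eth_call: %d", gas)
}
```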
The main issue I see with this is that different providers can have different eth_call gas limits. For example, Infura has a pretty low one of around 20 million.
As long as we can stay under that we should be fine for most users, though.
Good point about the gas cap on eth_call. Updated the OP to note that we should benchmark the gas cost of the batch functions exposed by the viewer contract.
One other point to note is that the bottleneck now becomes EVM execution speed instead of RPC calls.
I have just written a Viewer contract that we could use for getting the stakes and the transcoder pool.
I still have concerns about the gas limits imposed on calls by several RPC providers when aggregating data from several different functions for 100 orchestrators.
Now that the ServiceURI is included in the subgraph Transcoder entity, we could also use the subgraph to query the transcoder pool. The downside here is that the subgraph service itself hasn't been the most reliable and is sometimes inaccessible, which could affect a node's ability to start up. WDYT?
For local setups we can still use regular RPC calls if no subgraph is defined on startup.
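For reference, fetching the pool from the subgraph would be a single GraphQL request. A rough Go sketch of what that query could look like is below; the hosted subgraph URL, the `active` filter and the non-ServiceURI field names are assumptions about the deployed schema rather than something confirmed in this thread:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Illustrative query; the entity and field names should be checked against the
// deployed Livepeer subgraph schema.
const poolQuery = `{
  transcoders(where: { active: true }) {
    id
    serviceURI
    totalStake
    rewardCut
    feeShare
  }
}`

type graphResp struct {
	Data struct {
		Transcoders []struct {
			ID         string `json:"id"`
			ServiceURI string `json:"serviceURI"`
			TotalStake string `json:"totalStake"`
			RewardCut  string `json:"rewardCut"`
			FeeShare   string `json:"feeShare"`
		} `json:"transcoders"`
	} `json:"data"`
}

func main() {
	// Assumed hosted subgraph endpoint; in go-livepeer this would come from the
	// proposed subgraph flag.
	url := "https://api.thegraph.com/subgraphs/name/livepeer/livepeer"

	body, err := json.Marshal(map[string]string{"query": poolQuery})
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out graphResp
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}

	for _, t := range out.Data.Transcoders {
		fmt.Println(t.ID, t.ServiceURI, t.TotalStake)
	}
}
```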
I quickly prototyped something up and tested it manually with a mainnet broadcaster node.
It's an extensible subgraph client although it currently only implements a single function.
https://github.com/livepeer/go-livepeer/tree/nv/subgraph-transcoderpool
It's still missing unit tests.
I do see reasons why we'd want a viewer contract instead though, so I'm interested in what your opinion is here.
Some initial benchmarking results using the mainnet orchestrator pool. This data is for fetching the on-chain data for all transcoders in the TranscoderPool; it does not include separately querying each transcoder for its off-chain data.
Speed
baseline (current release)
- Time fetching TranscoderPool 31.801603581s
subgraph
- Time fetching TranscoderPool 0.242s
Viewer contract
- Time fetching TranscoderPool 0.89s
The subgraph seems to win out on speed as a "speed-up" option
Integration
subgraph
- Maintained by Livepeer Inc regardless
- Simple HTTP Client (which could be used for additional features going forward)
- Requires additional flag
Viewer contract
- Requires code maintenance for the Viewer contract and the Client integration
- Requires a newly deployed contract if we want to add / make changes
- The current version of abigen can't determine naming for auto-generated structs when ABIEncoderV2 is used; we have to manually rename them or put the contracts in separate packages
This is slightly opinionated, but I'd say the subgraph integration also wins in this category
Other considerations
Subgraph
- Is a 3rd party service, not on-chain data
Viewer contract
- 10 million gas (9462655) for the eth_call to get the transcoder pool. This is OK for Infura but unclear how well this works with other providers or self-hosted nodes (can't find any immediate info for geth)
The current gas for the eth_call would be okay as it's barely under the block gas limit; however, if we were to increase the transcoder pool size it is uncertain how this would affect usage with services other than Infura, or with self-hosted Ethereum nodes.
From the benchmarks as well as the "other considerations", the best way to achieve a direct speed-up is to use the subgraph, which gives a ~100x reduction in the time to fetch the transcoder pool.
The current prototype makes usage of the feature optional (enabled by providing the "subgraph" flag), and when the subgraph is unavailable we can still use good old RPC calls to start the node.
We can still use a viewer contract for other solutions (such as caching stakes each round) to reduce RPC calls; however, I think we can also use the subgraph's Pool entity to accomplish this.
Thus, unless you have other objections @yondonfu, I think it makes sense to continue with the subgraph integration: adding unit tests for the current functionality and scoping fetching stakes upon round initialization into a different issue that would also use the subgraph.
As mentioned during the planning meeting, the subgraph and a viewer contract aren't mutually exclusive features.
The workflow would be:
- Try fetching the TranscoderPool using the subgraph (if the subgraph flag is specified on node startup)
- On failure, make RPC calls just as before
- Return the TranscoderPool
A viewer contract could easily fit in here as well (see the sketch after this list):
- Try fetching the TranscoderPool using the subgraph (if the subgraph flag is specified on node startup)
- On failure, use the viewer contract to fetch the TranscoderPool
- On failure of the viewer contract (e.g. gas exceeds the call limit), execute RPC calls as before
- Return the TranscoderPool
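To make that ordering concrete, here is a rough Go sketch of the fallback chain. All of the names in it are hypothetical stand-ins (none of these functions exist in go-livepeer); it only illustrates the order of attempts:

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// Hypothetical transcoder record; the real node would use its own types.
type transcoder struct {
	Address    string
	TotalStake string
	ServiceURI string
}

var errNoSubgraph = errors.New("no subgraph flag set")

func poolFromSubgraph(subgraphURL string) ([]transcoder, error) {
	if subgraphURL == "" {
		return nil, errNoSubgraph
	}
	// ... single GraphQL request, as sketched earlier in the thread ...
	return nil, errors.New("subgraph unavailable")
}

func poolFromViewer() ([]transcoder, error) {
	// ... single eth_call to a viewer contract's getTranscoderPool() ...
	return nil, errors.New("eth_call gas cap exceeded")
}

func poolFromRPC() ([]transcoder, error) {
	// ... N individual RPC calls, as the node does today ...
	return []transcoder{}, nil
}

// transcoderPool tries the cheapest source first and falls back on failure.
func transcoderPool(subgraphURL string) ([]transcoder, error) {
	pool, err := poolFromSubgraph(subgraphURL)
	if err == nil {
		return pool, nil
	}
	log.Printf("subgraph fetch failed, falling back: %v", err)

	pool, err = poolFromViewer()
	if err == nil {
		return pool, nil
	}
	log.Printf("viewer contract fetch failed, falling back: %v", err)

	return poolFromRPC()
}

func main() {
	// No subgraph flag set in this example, so the call falls through to the
	// viewer contract and then to plain RPC calls.
	pool, err := transcoderPool("")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("fetched %d transcoders\n", len(pool))
}
```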
For now though, if the subgraph works well as an optional feature to speed up the node's operations, I deem that to be sufficient. The subgraph flag makes it opt-in for users to use this hosted service.