relayer
CosmosChainProcessor - dynamic block time targeting
Adds a block query targeting mechanism, accounting for clock drift and the variable amount of RPC node time from the block timestamp until blocks are ready to be queried. This removes the fixed 1 second minQueryLoopDuration and reduces queries by holding a rolling average of the delta time between blocks, allowing the processor to target block query times on chains with different consensus timeouts (block times). In addition, it compares the block timestamp against the timestamp when the queries were initiated, and it holds a clock drift parameter for fine-tuning when the next block queries will be initiated. This reduces the queries on the nodes, cleans up the logs, and captures blocks as soon as they are ready to be queried.
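As a rough sketch of the idea (the blockTargeter type, its fields, and the 9/10 weighting below are illustrative, not the actual CosmosChainProcessor code), the processor can keep a rolling average of block-to-block deltas plus a trim, and sleep until the next block should be queryable instead of polling on a fixed 1 second loop:

```go
package main

import (
	"fmt"
	"time"
)

// blockTargeter is a hypothetical sketch: it keeps a rolling average of the
// observed delta between block timestamps, plus a trim that absorbs clock
// drift and the lag between a block's consensus timestamp and when the RPC
// node can actually serve it.
type blockTargeter struct {
	avgBlockTime time.Duration // rolling average of block-to-block deltas
	trim         time.Duration // clock drift + query-readiness lag
	lastBlockTS  time.Time     // consensus timestamp of the last processed block
	samples      int
}

// observeBlock folds a newly processed block's consensus timestamp into the
// rolling average of block times.
func (t *blockTargeter) observeBlock(blockTS time.Time) {
	if !t.lastBlockTS.IsZero() {
		delta := blockTS.Sub(t.lastBlockTS)
		if t.samples == 0 {
			t.avgBlockTime = delta
		} else {
			// simple exponential-style rolling average
			t.avgBlockTime = (t.avgBlockTime*9 + delta) / 10
		}
		t.samples++
	}
	t.lastBlockTS = blockTS
}

// sleepUntilNextQuery returns how long to wait before initiating the next
// block query, replacing a fixed minQueryLoopDuration.
func (t *blockTargeter) sleepUntilNextQuery(now time.Time) time.Duration {
	if t.samples == 0 {
		return time.Second // fall back to a fixed interval until we have data
	}
	target := t.lastBlockTS.Add(t.avgBlockTime + t.trim)
	if wait := target.Sub(now); wait > 0 {
		return wait
	}
	return 0 // the next block should already be queryable
}

func main() {
	bt := &blockTargeter{trim: 300 * time.Millisecond}
	start := time.Now()
	// Simulate three blocks roughly 6s apart.
	for i := 0; i < 3; i++ {
		bt.observeBlock(start.Add(time.Duration(i) * 6 * time.Second))
	}
	fmt.Println("avg block time:", bt.avgBlockTime)
	fmt.Println("sleep before next query:", bt.sleepUntilNextQuery(start.Add(13*time.Second)))
}
```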
One additional question, is there ever a risk of missing a block? E.g. Waiting too long?
If we wait too long, it will see that multiple blocks need to be queried and query them in sequence. It always starts at persistence.latestQueriedBlock + 1, and persistence.latestQueriedBlock is only updated when a block is successfully processed.
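For illustration, here is a minimal sketch of that catch-up behavior (persistenceState, queryBlock, and catchUp are hypothetical stand-ins, not the relayer's actual API): if the processor wakes up late, it walks every height from latestQueriedBlock + 1 to the chain tip in sequence, and only advances the cursor when a block is processed successfully.

```go
package main

import "fmt"

type persistenceState struct {
	latestQueriedBlock int64
}

// queryBlock is a stand-in for the RPC block query and processing; it returns
// an error if the block could not be fetched or processed.
func queryBlock(height int64) error {
	fmt.Println("processing block", height)
	return nil
}

func catchUp(p *persistenceState, latestHeight int64) {
	for h := p.latestQueriedBlock + 1; h <= latestHeight; h++ {
		if err := queryBlock(h); err != nil {
			// Stop here; latestQueriedBlock is not advanced, so the next
			// loop iteration retries from this same height.
			return
		}
		p.latestQueriedBlock = h
	}
}

func main() {
	p := &persistenceState{latestQueriedBlock: 100}
	catchUp(p, 103) // processes blocks 101, 102, 103 in sequence
	fmt.Println("latest queried:", p.latestQueriedBlock)
}
```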
I also don't think "clock drift" is the correct description of what is being modified here, because we are not trying to synchronize a local wall clock with a remote one, IIUC -- rather we are trying to match an irregular timing to an unpredictable remote state. Maybe "backoff" would be slightly more accurate?
It's a trim value that is the sum of both clock drift (comparing the block's consensus timestamp against local machine time) and the variable amount of time from then until the block is ready to be queried. So yes, maybe backoff or timeTrim would be a better name.
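To make the naming discussion concrete, a tiny sketch (observedTrim is a hypothetical helper, not relayer code) of the quantity that single value has to approximate: the gap between the local clock at the moment a block first becomes queryable and the block's consensus timestamp, which bundles clock drift and readiness lag together.

```go
package main

import (
	"fmt"
	"time"
)

// observedTrim returns the gap the trim value approximates: the difference
// between the local wall-clock time at which the block was first successfully
// queried and the block's consensus timestamp.
func observedTrim(blockTimestamp, localQuerySuccess time.Time) time.Duration {
	return localQuerySuccess.Sub(blockTimestamp)
}

func main() {
	// Pretend the block's consensus timestamp was 800ms before it became queryable locally.
	blockTS := time.Now().Add(-800 * time.Millisecond)
	fmt.Println("observed trim:", observedTrim(blockTS, time.Now()))
}
```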
Was just talking to @boojamya today about updating clients on channels with very little traffic (i.e. making sure the client gets updated at least once per trusting period), and we had a need for an estimated block time. Are we persisting this anywhere? Also, this is a cool feature.