go-algorand
[API][WSS] Optimize or limit status/wait-for-block-after/{round}
Status
The GET v2/status/wait-for-block-after/{round} endpoint is commonly called once per round by API customers to get the next block as soon as it is available in the node. This is typically fine, but recent Goracle testing revealed that the endpoint is non-performant under heavy load, and it is never cacheable: any request made before the block is inserted in the ledger must pass through to the server.
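The per-round polling pattern described above can be sketched as follows. The `fetchFunc` abstraction and function names are hypothetical, introduced here only so the loop can be shown without a live node:

```go
package main

import (
	"fmt"
)

// fetchFunc abstracts the HTTP call to
// GET v2/status/wait-for-block-after/{round}, returning the last
// committed round (hypothetical helper for illustration).
type fetchFunc func(round uint64) (lastRound uint64, err error)

// pollBlocks issues one wait-for-block-after request per round, mirroring
// the per-block polling pattern the issue describes. It stops after n rounds.
func pollBlocks(start uint64, n int, fetch fetchFunc) ([]uint64, error) {
	rounds := make([]uint64, 0, n)
	round := start
	for i := 0; i < n; i++ {
		last, err := fetch(round)
		if err != nil {
			return rounds, err
		}
		rounds = append(rounds, last)
		round = last // next iteration waits for the block after this one
	}
	return rounds, nil
}

func main() {
	// Fake fetcher: each call reports the next round as committed.
	fake := func(round uint64) (uint64, error) { return round + 1, nil }
	got, _ := pollBlocks(100, 3, fake)
	fmt.Println(got) // [101 102 103]
}
```

Every client following a chain this way issues one blocked request per round, which is why the endpoint is hit every round by every customer.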
Expected
Under load, the endpoint should not consume resources in excess of what the instance has available, nor crash or restart algod.
Solution
One suggested solution is to allow the node runner to cap the number of users allowed to queue waiting for the block.
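One way such a cap could be implemented is a counting semaphore checked before a request is allowed to block; the names, error, and limit below are illustrative, not the node's actual configuration:

```go
package main

import (
	"errors"
	"fmt"
)

// errTooManyWaiters is returned when the configured waiter cap is
// exhausted (hypothetical error for illustration).
var errTooManyWaiters = errors.New("too many waiters for wait-for-block-after")

// waiterCap is a counting semaphore limiting concurrent blocked requests.
type waiterCap struct {
	slots chan struct{}
}

func newWaiterCap(max int) *waiterCap {
	return &waiterCap{slots: make(chan struct{}, max)}
}

// acquire reserves a waiter slot, failing immediately (rather than
// queueing) when the node runner's cap is reached.
func (w *waiterCap) acquire() error {
	select {
	case w.slots <- struct{}{}:
		return nil
	default:
		return errTooManyWaiters
	}
}

// release frees a slot once the blocked request has been answered.
func (w *waiterCap) release() { <-w.slots }

func main() {
	wc := newWaiterCap(2)
	fmt.Println(wc.acquire(), wc.acquire()) // <nil> <nil>
	fmt.Println(wc.acquire() != nil)        // true: cap reached
	wc.release()
	fmt.Println(wc.acquire()) // <nil>
}
```

Failing fast keeps memory and goroutine usage bounded regardless of how many clients poll the same round.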
An alternative would be to track all the users requesting the same block and answer them in a batch.
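The batching idea amounts to coalescing all waiters on the same round and fanning one ledger event out to them. A minimal sketch, assuming a per-round registry (not the node's actual data structures):

```go
package main

import (
	"fmt"
	"sync"
)

// roundWaiters coalesces all requests waiting on the same round so the
// ledger is consulted once and every waiter is answered in a batch.
type roundWaiters struct {
	mu      sync.Mutex
	waiters map[uint64][]chan uint64 // round -> waiting request channels
}

func newRoundWaiters() *roundWaiters {
	return &roundWaiters{waiters: make(map[uint64][]chan uint64)}
}

// wait registers a request for the block after round and returns a
// channel that will deliver the committed round.
func (r *roundWaiters) wait(round uint64) <-chan uint64 {
	ch := make(chan uint64, 1)
	r.mu.Lock()
	r.waiters[round] = append(r.waiters[round], ch)
	r.mu.Unlock()
	return ch
}

// notify is called once when a block lands; every queued waiter for that
// round is answered from the single event. It returns the batch size.
func (r *roundWaiters) notify(round, committed uint64) int {
	r.mu.Lock()
	chans := r.waiters[round]
	delete(r.waiters, round)
	r.mu.Unlock()
	for _, ch := range chans {
		ch <- committed
	}
	return len(chans)
}

func main() {
	rw := newRoundWaiters()
	a, b := rw.wait(100), rw.wait(100)
	n := rw.notify(100, 101)
	fmt.Println(n, <-a, <-b) // 2 101 101
}
```

With this shape, N clients waiting on the same round cost one lookup and N channel sends instead of N independent blocked requests.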
Ideally, however, this use case would be served by a subscription service, allowing users to request a constant stream of blocks as they are produced rather than making a request per block. This is a popular option on other chains, using JSON RPC over websockets.
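The core of such a subscription service is a hub that pushes each committed round to every subscriber; the JSON RPC / websocket transport would sit on top of it. The sketch below is illustrative only and deliberately drops updates for slow consumers rather than stalling block delivery:

```go
package main

import (
	"fmt"
	"sync"
)

// blockHub streams each new round to every subscriber as it is produced,
// replacing per-block polling with a push model.
type blockHub struct {
	mu   sync.Mutex
	subs []chan uint64
}

// subscribe returns a buffered channel that receives committed rounds.
func (h *blockHub) subscribe(buf int) <-chan uint64 {
	ch := make(chan uint64, buf)
	h.mu.Lock()
	h.subs = append(h.subs, ch)
	h.mu.Unlock()
	return ch
}

// publish fans a newly committed round out to all subscribers, skipping
// any subscriber whose buffer is full so one slow client cannot block
// delivery to the rest.
func (h *blockHub) publish(round uint64) {
	h.mu.Lock()
	defer h.mu.Unlock()
	for _, ch := range h.subs {
		select {
		case ch <- round:
		default: // slow consumer: drop instead of stalling
		}
	}
}

func main() {
	var hub blockHub
	sub := hub.subscribe(4)
	hub.publish(101)
	hub.publish(102)
	fmt.Println(<-sub, <-sub) // 101 102
}
```

A design decision a real implementation would need to make explicit is exactly this drop-versus-block policy for slow websocket clients.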
Dependencies
Urgency
Not especially urgent, although the experience of the Goracle testing merits some concern.