Implement a page-like system on the REST API
Implement a system that can return only a small part of all the available data.
Because the engine uses a key-value database and we use the generated hash as the key, the data added to the database is not sorted by insertion order. We'll need to implement a system to guarantee the insertion order; it could be an incremental index that references the hash.
Issues/PRs that might be useful for adding this pagination:
- https://github.com/cosmos/cosmos-sdk/issues/5420
- https://github.com/cosmos/cosmos-sdk/pull/5405
- https://github.com/cosmos/cosmos-sdk/pull/5435
- https://github.com/cosmos/cosmos-sdk/pull/4658
@krhubert found a paginated iterator on the kvstore: https://github.com/cosmos/cosmos-sdk/blob/master/store/types/iterator.go
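For context, the core of such a paginated iterator is just skipping `page*limit` entries and then collecting at most `limit` values. Below is a minimal Go sketch of that logic, using a toy slice-backed iterator instead of the real cosmos-sdk store iterator (all type and function names here are illustrative, not the actual API):

```go
package main

import "fmt"

// kv is a single key/value entry.
type kv struct{ key, value string }

// sliceIterator is a toy stand-in for a cosmos-sdk store iterator over an
// index_-prefixed key range; names here are illustrative, not the real API.
type sliceIterator struct {
	entries []kv
	pos     int
}

func (it *sliceIterator) Valid() bool   { return it.pos < len(it.entries) }
func (it *sliceIterator) Next()         { it.pos++ }
func (it *sliceIterator) Value() string { return it.entries[it.pos].value }

// paginate skips page*limit entries and returns at most limit values,
// which is roughly what a paginated prefix iterator has to do.
func paginate(it *sliceIterator, page, limit int) []string {
	// Skip the entries that belong to previous pages.
	for skipped := 0; skipped < page*limit && it.Valid(); skipped++ {
		it.Next()
	}
	// Collect at most limit values for the requested page.
	out := make([]string, 0, limit)
	for ; it.Valid() && len(out) < limit; it.Next() {
		out = append(out, it.Value())
	}
	return out
}

func main() {
	it := &sliceIterator{entries: []kv{
		{"index_0", "A"}, {"index_1", "B"}, {"index_2", "C"},
	}}
	fmt.Println(paginate(it, 1, 2)) // page 1 with limit 2 -> [C]
}
```

The exact page numbering (0-based vs 1-based) of the cosmos-sdk helper may differ; the sketch only shows the skip-then-collect pattern.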
Summary of discussion with @krhubert:
We need to implement a new index system in order to add data to the store in a chronological and deterministic way.
The proposed solution is to use a simple incremental index.
Each kvstore will have the following:
- `index -> value` (key prefixed with `index_`)
- `hash -> index` (key prefixed with `hash_`)
- a unique `counter` key containing the number of elements in the store
So for example, with 2 elements of value A and B, we will have in the store:

| Key | Value |
|---|---|
| index_0 | A |
| index_1 | B |
| hash_HASH_OF_A | index_0 |
| hash_HASH_OF_B | index_1 |
| counter | 2 |
This way, iterators can iterate on `index_XXX` keys in chronological order. And to get the value from a hash, 2 reads will be required: `hash_HASH_OF_A` -> `index_0` -> `A`.
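A minimal Go sketch of this scheme, using an in-memory map instead of the real kvstore (the `store`, `add` and `getByHash` names are illustrative, not the actual keeper API):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// store is a toy in-memory stand-in for the kvstore; the key layout
// (index_, hash_, counter) follows the scheme described above.
type store struct{ data map[string][]byte }

func newStore() *store { return &store{data: map[string][]byte{}} }

// count reads the counter key (0 when the store is empty).
func (s *store) count() uint64 {
	raw, ok := s.data["counter"]
	if !ok {
		return 0
	}
	return binary.BigEndian.Uint64(raw)
}

// add stores value under the next index, maps its hash to that index key,
// and increments the counter, preserving insertion order.
func (s *store) add(value []byte) {
	n := s.count()
	indexKey := fmt.Sprintf("index_%d", n)
	hash := sha256.Sum256(value)

	s.data[indexKey] = value
	s.data[fmt.Sprintf("hash_%x", hash)] = []byte(indexKey)

	counter := make([]byte, 8)
	binary.BigEndian.PutUint64(counter, n+1)
	s.data["counter"] = counter
}

// getByHash performs the two reads described above: hash -> index -> value.
func (s *store) getByHash(hash [sha256.Size]byte) ([]byte, bool) {
	indexKey, ok := s.data[fmt.Sprintf("hash_%x", hash)]
	if !ok {
		return nil, false
	}
	value, ok := s.data[string(indexKey)]
	return value, ok
}

func main() {
	s := newStore()
	s.add([]byte("A"))
	s.add([]byte("B"))

	// Chronological iteration over index_0 .. index_{counter-1}.
	for i := uint64(0); i < s.count(); i++ {
		fmt.Printf("index_%d -> %s\n", i, s.data[fmt.Sprintf("index_%d", i)])
	}

	// Lookup by hash requires two reads.
	if v, ok := s.getByHash(sha256.Sum256([]byte("B"))); ok {
		fmt.Printf("hash lookup -> %s\n", v)
	}
}
```

Note that in a real kvstore the index part of the key would need a fixed-width encoding (e.g. big-endian uint64) so that lexicographic iteration matches insertion order; the decimal string above is only kept for readability.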
@krhubert make sure to check https://github.com/cosmos/cosmos-sdk/pull/4658/files#diff-47df39648f34971e27a388de405b8ffc. It seems there is already some kind of pagination system in place, but I suggest implementing the pagination directly in the keepers' methods rather than returning ALL data and then filtering it in the querier methods.
I don't really like the idea of mixing the index database and the data. I do like the idea of an index database though. The only thing is that I would do it externally and not in the keeper, for a few reasons:
- No extra cost when storing the data. Adding an index increases the cost of a transaction (quite negligible, but still something to consider).
- Possibility to disable the index easily. Indexing data is not necessary for all nodes: a validator, for example, might not need it, a runner as well, etc. By keeping the index database separate, we don't force every node to index data.
- What if we change the indexing scheme to support a different type of pagination, or anything else? Then we break the keeper. If we have the database outside, we just need to reindex without altering the current data.
- Indexes in databases can be costly when writing, and here it also means more data to synchronize.
I would recommend having a separate key-value database that lives only on the client, with nothing in the keeper/blockchain. This way we can have a global index database (as tendermint does) where we can even put all the resources:
- service_0: hash1
- service_1: hash2
- process_0: hash3
- process_length: 1
- service_length: 2
We could populate this index at the end of each block by iterating over all the txs/msgs to add/remove indexes. Have a look at the tendermint implementation of this indexer database: https://github.com/tendermint/tendermint/blob/c4f7256766fd4cd46ac89c5259e77fe5b0a0bf45/state/txindex/indexer_service.go#L33
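A rough Go sketch of what such an external indexer could look like, with an in-memory map and hard-coded block events standing in for a real block subscription (all names are illustrative; only the `<kind>_<n>` / `<kind>_length` layout comes from the example above):

```go
package main

import (
	"fmt"
	"sync"
)

// resourceEvent is a simplified stand-in for what would be extracted from a
// block's txs/msgs (resource kind plus its hash); the names are illustrative.
type resourceEvent struct {
	kind string // e.g. "service", "process"
	hash string
}

// indexer maintains the client-side index database described above:
// <kind>_<n> -> hash and <kind>_length -> count. It lives entirely outside
// the keeper, so a validator or runner can simply not run it.
type indexer struct {
	mu   sync.Mutex
	data map[string]string
	lens map[string]int
}

func newIndexer() *indexer {
	return &indexer{data: map[string]string{}, lens: map[string]int{}}
}

// handleBlock applies the resource events of one block to the index.
// Removals/reindexing are left out of this sketch.
func (ix *indexer) handleBlock(events []resourceEvent) {
	ix.mu.Lock()
	defer ix.mu.Unlock()
	for _, e := range events {
		n := ix.lens[e.kind]
		ix.data[fmt.Sprintf("%s_%d", e.kind, n)] = e.hash
		ix.lens[e.kind] = n + 1
	}
}

func main() {
	ix := newIndexer()
	// In a real node these events would come from a block subscription,
	// similar to what tendermint's IndexerService does for txs.
	blocks := [][]resourceEvent{
		{{kind: "service", hash: "hash1"}, {kind: "service", hash: "hash2"}},
		{{kind: "process", hash: "hash3"}},
	}
	for _, block := range blocks {
		ix.handleBlock(block)
	}
	fmt.Println(ix.data) // service_0/1 -> hash1/2, process_0 -> hash3
	fmt.Println(ix.lens) // service: 2, process: 1
}
```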
After some discussion with Antho, I will implement a very simple system that filters and sorts (when possible) resources in the http and cmds functions.
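A minimal sketch of that approach for the HTTP side, with a hypothetical `/services` endpoint and `owner`/`offset`/`limit` query parameters (none of these names come from the actual API; the underlying call still returns everything and the handler filters, sorts and paginates afterwards):

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"sort"
	"strconv"
)

// service is a minimal placeholder for a resource returned by the API.
type service struct {
	Hash  string `json:"hash"`
	Owner string `json:"owner"`
}

// listServices stands in for the existing call that returns ALL services;
// filtering, sorting and pagination happen afterwards, in the HTTP layer only.
func listServices() []service {
	return []service{{"hash2", "bob"}, {"hash1", "alice"}, {"hash3", "alice"}}
}

// handler filters by owner, sorts by hash and applies offset/limit on the
// already-fetched slice. The endpoint and query parameters are hypothetical.
func handler(w http.ResponseWriter, r *http.Request) {
	services := listServices()

	// Filter (optional ?owner= query parameter).
	if owner := r.URL.Query().Get("owner"); owner != "" {
		filtered := services[:0]
		for _, s := range services {
			if s.Owner == owner {
				filtered = append(filtered, s)
			}
		}
		services = filtered
	}

	// Sort deterministically when possible (here by hash).
	sort.Slice(services, func(i, j int) bool { return services[i].Hash < services[j].Hash })

	// Paginate with ?offset= and ?limit=.
	offset, _ := strconv.Atoi(r.URL.Query().Get("offset"))
	limit, _ := strconv.Atoi(r.URL.Query().Get("limit"))
	if offset < 0 {
		offset = 0
	}
	if offset > len(services) {
		offset = len(services)
	}
	services = services[offset:]
	if limit > 0 && limit < len(services) {
		services = services[:limit]
	}

	json.NewEncoder(w).Encode(services)
}

func main() {
	http.HandleFunc("/services", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The same filter/sort/offset/limit logic can be reused in the cmds functions, since both layers work on the full list returned by the existing calls.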