High performance `getledgerentry`
Description
Resolves #4306
The `getledgerentry` core endpoint is now high-performance, non-blocking, and served by a multi-threaded HTTP server that does not interact with the main thread. This enables downstream systems to query the endpoint at high rates without captive-core nodes losing sync. Note that this endpoint is served on a different port, separate from stellar-core's other endpoints. The following config options have been added to support this feature:
```toml
RPC_HTTP_PORT = 11627    # default listening port
RPC_THREADS = 4          # default threads serving the getledgerentry endpoint
RPC_SNAPSHOT_LEDGERS = 5 # default number of ledgers retained in history
```
The HTTP request string is as follows:
```
getledgerentry?key=Base64&ledgerSeq=NUM
```
`key` is required and is the Base64-encoded XDR of the LedgerKey being queried. `ledgerSeq` is optional: if not set, stellar-core will return the LedgerEntry based on the most recent ledger; if set, stellar-core will return the entry based on a historical ledger snapshot at the given ledger. The return payload is a JSON object in the following format:
"ledger": ledgerSeq, // Required
"state": ["not_found" | "live" | "dead"], // Required
"entry": Base64 // Optional
`ledger` is the ledgerSeq that the query is based on and is always returned. `state` is `"live"` if a live LedgerEntry was found, or `"dead"` if the LedgerEntry does not exist. Additionally, if `ledgerSeq` is set to a snapshot that stellar-core does not currently have, `"not_found"` is returned. Finally, if `state == "live"`, `entry` is returned with the Base64 XDR encoding of the full LedgerEntry.
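For example, a minimal Go sketch of a point query (placeholder key; port taken from the `RPC_HTTP_PORT` example above):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Port from the RPC_HTTP_PORT example above; key is a placeholder.
	params := url.Values{}
	params.Set("key", "AAAA...")        // Base64 XDR LedgerKey (placeholder)
	params.Set("ledgerSeq", "55000000") // optional; omit to query the latest ledger

	resp, err := http.Get("http://localhost:11627/getledgerentry?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // e.g. {"ledger": 55000000, "state": "live", "entry": "..."}
}
```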
To measure performance, I used a parallel Go script (thanks @Shaptic) with stellar/go/clients/stellarcore to send requests at a very high rate over localhost. 1 million LedgerEntries of type ContractCode, ContractData, and Trustline were randomly sampled from the BucketList (such that all entries exist) for these requests, and no caching was used. The test was run on test-core-003a.dev.stellar002 with a captive-core instance in sync with pubnet, with the following results:
| Request Rate | Total Queries | Failed | Success Rate | Avg Latency | Min Latency |
|---|---|---|---|---|---|
| 2,731/sec | 100,910 | 0 | 100% | 366.163 µs | 256.861 µs |
| 11,836/sec | 103,390 | 0 | 100% | 422.453 µs | 204.412 µs |
| 18,245/sec | 100,740 | 0 | 100% | 548.105 µs | 214.905 µs |
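The load-generation script itself isn't part of this PR; as a rough sketch of its shape (hypothetical code, not the actual script, assuming plain HTTP GETs against the endpoint above):

```go
// Rough sketch of a parallel load generator (hypothetical, not the actual
// test script). Workers issue GETs for pre-sampled Base64 LedgerKeys and
// the aggregate request rate is reported at the end.
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	keys := []string{ /* pre-sampled Base64 LedgerKeys */ }
	const workers = 64
	var ok, failed atomic.Int64
	var wg sync.WaitGroup
	start := time.Now()

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(offset int) {
			defer wg.Done()
			// Each worker takes a strided slice of the sampled keys.
			for i := offset; i < len(keys); i += workers {
				u := "http://localhost:11627/getledgerentry?key=" + url.QueryEscape(keys[i])
				resp, err := http.Get(u)
				if err != nil {
					failed.Add(1)
					continue
				}
				resp.Body.Close()
				if resp.StatusCode != http.StatusOK {
					failed.Add(1)
					continue
				}
				ok.Add(1)
			}
		}(w)
	}
	wg.Wait()

	elapsed := time.Since(start).Seconds()
	fmt.Printf("success: %d, failed: %d, rate: %.0f req/sec\n",
		ok.Load(), failed.Load(), float64(ok.Load())/elapsed)
}
```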
Checklist
- [ ] Reviewed the contributing document
- [ ] Rebased on top of master (no merge commits)
- [ ] Ran `clang-format` v8.0.0 (via `make format` or the Visual Studio extension)
- [ ] Compiles
- [ ] Ran all tests
- [ ] If change impacts performance, include supporting evidence per the performance document
I've now added a batch load endpoint called `getledgerentrybatch`. This is a POST method and requires the following body:
```
ledgerSeq=NUM&key=Base64&key=Base64...
```
`ledgerSeq` is an optional value that follows the same semantics as in the `getledgerentry` endpoint. It is followed by one or more `key` values to be queried. The return value is a JSON payload as follows:
```json
{
  "entries": [
    {"entry": "Base64-LedgerKey", "state": "dead"},   // dead entry
    {"entry": "Base64-LedgerEntry", "state": "live"}  // live entry
  ],
  "ledger": ledgerSeq
}
```
If a ledgerSeq is queried but is not available, the return payload is as follows:
{"ledger": ledgerSeq, "state": "not_found"}
I know this is just a prototype, but it would be more ergonomic to accept JSON in the POST body.
Also, from the example response it seems like "entry" could point to either an entry or a key. I would suggest always providing a key and optionally providing an entry field, which can be omitted, as follows:
```json
{
  "entries": [
    {"key": "Base64-LedgerKey", "state": "dead"},                               // dead entry
    {"key": "Base64-LedgerKey", "entry": "Base64-LedgerEntry", "state": "live"} // live entry
  ],
  "ledger": ledgerSeq
}
```
Regarding:
{"ledger": ledgerSeq, "state": "not_found"}
Could you simply use a 404 HTTP status code instead?
Additionally, how can I distinguish TTL'ed entries? Are TTL'ed entries the ones with "dead" status? I will also need the entry body for TTL'ed entries.
To be clear, what I need is a way to implement SnapshotSourceWithArchive from the endpoint you provide.
> Additionally, how can I distinguish TTL'ed entries? Are TTL'ed entries the ones with "dead" status? I will need the entry for those as well.
If an entry has been evicted, it will be reported as DEAD. If the entry is expired but not evicted, it will be returned as LIVE. This is just a raw key-value lookup that doesn’t enforce TTLs. To determine if a key is dead or not, you’ll need to load both the entry key and the TTL key. Here, live means “the key exists on the BucketList” and dead means “key does not exist on the BucketList” and is unrelated to TTL. I believe this is the same interface as your get_including_archived.
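For illustration, here's a sketch of how a downstream client could derive the TTL key from an entry key so both can go into a single batch query (TTL entries are keyed by the SHA-256 hash of the associated LedgerKey's XDR; xdr field names assumed from stellar/go, a sketch rather than part of this PR):

```go
// Sketch: derive the TTL LedgerKey for a Soroban entry so both keys can be
// fetched together. xdr field names are assumed from stellar/go.
package main

import (
	"crypto/sha256"
	"fmt"

	"github.com/stellar/go/xdr"
)

func ttlKeyFor(entryKey xdr.LedgerKey) (xdr.LedgerKey, error) {
	raw, err := entryKey.MarshalBinary()
	if err != nil {
		return xdr.LedgerKey{}, err
	}
	// TTL entries are keyed by the hash of the associated LedgerKey's XDR.
	return xdr.LedgerKey{
		Type: xdr.LedgerEntryTypeTtl,
		Ttl:  &xdr.LedgerKeyTtl{KeyHash: xdr.Hash(sha256.Sum256(raw))},
	}, nil
}

func main() {
	// Illustrative zero-hash ContractCode key.
	key := xdr.LedgerKey{
		Type:         xdr.LedgerEntryTypeContractCode,
		ContractCode: &xdr.LedgerKeyContractCode{Hash: xdr.Hash{}},
	}
	ttlKey, err := ttlKeyFor(key)
	if err != nil {
		panic(err)
	}
	keyB64, _ := xdr.MarshalBase64(key)
	ttlB64, _ := xdr.MarshalBase64(ttlKey)
	// Both Base64 keys can now go into a single getledgerentrybatch request.
	fmt.Println(keyB64, ttlB64)
}
```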
Ah, ok, so "live" means not evicted (but possibly expired), and I need to query TTL entries separately (all the more reason to have a batch endpoint).
If "dead" means not found, we can simply omit those entries and assume omitted entries where not found.
Then we can get rid of the state field altogether (since present entries are implicitly live)
What is the maximum number of ledger entry keys this endpoint could accept?
> What is the maximum number of ledger entry keys this endpoint could accept?
I haven't tested the maximum number of entries for a single query. However, I doubt it will be a limiting factor, given we achieved a request rate of ~18k RPS with an average latency of 548.105 µs for point loads, and bulk loads are more efficient.
@SirTyson We currently support 200 keys, which is more than sufficient. Btw, the current interface is here, and I hope this is a non-breaking change for downstream clients: https://developers.stellar.org/docs/data/rpc/api-reference/methods/getLedgerEntries
@janewang this endpoint will not be directly exposed to clients. This is the backend that rpc will call, so if more encodings or other protocols/semantics need to be supported in clients, this would be done in rpc, not core.
LGTM, could you please rebase and squash?
> LGTM, could you please rebase and squash?
Done
All your comments should be addressed @Shaptic