De-duplicate pubkey caches across `BeaconState`/`BeaconChain`
Description
Presently Lighthouse has two notions of a pubkey cache. There's the "global" cache attached to the BeaconChain, and "local" caches attached to each BeaconState.
Since the introduction of in-memory tree-states, the local caches have been stored using a persistent data structure, meaning that many beacon states can share most of the pubkey cache without duplication. The idea behind this issue is to extend this structural sharing to the global public key cache.
Implementation
One way to implement this would be for new beacon state caches to be initialised from the global cache, by cloning it and making the necessary mutations. The persistent data structures (from `rpds`) then ensure that memory is shared.
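A minimal sketch of what this could look like, assuming `rpds::HashTrieMap` as the persistent map and placeholder types (the real Lighthouse types and cache layout will differ):

```rust
use std::sync::Arc;

use rpds::HashTrieMap;

/// Placeholder for the real pubkey type.
type PublicKey = [u8; 48];
type ValidatorIndex = u64;

/// Persistent map from validator index to pubkey; clones are cheap and share
/// all of their memory with the map they were cloned from.
type PubkeyCache = HashTrieMap<ValidatorIndex, Arc<PublicKey>>;

/// Initialise a state-local cache from the global cache, then add any
/// validators that exist in this state but are not yet in the global cache.
fn init_state_cache(
    global: &PubkeyCache,
    new_validators: &[(ValidatorIndex, PublicKey)],
) -> PubkeyCache {
    // Cheap clone: only the root pointer is copied, every entry is shared.
    let mut cache = global.clone();
    for (index, pubkey) in new_validators {
        // `insert` returns a new map that shares all unchanged branches with
        // the previous version (and hence with the global cache).
        cache = cache.insert(*index, Arc::new(*pubkey));
    }
    cache
}
```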
Another approach would be to have a global cache that is aware of changes to pubkeys over time, and can respond with results for different epoch values. The beacon states could contain an `Arc<RwLock<..>>` reference to this cache, and make queries. This approach might be more compatible with validator index reuse should it be implemented in future, as the global cache would never "forget" about old overwritten validators. This could be advantageous when reloading old states from disk, as the cache would already contain the relevant information and would not need to be rebuilt.
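A rough sketch of this second shape, with all names hypothetical (none of these are existing Lighthouse APIs):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

type Epoch = u64;
type ValidatorIndex = u64;
type PublicKey = [u8; 48];

/// A global cache that records when each pubkey became known, so it can answer
/// queries "as of" a given epoch and never forgets overwritten entries.
#[derive(Default)]
struct GlobalPubkeyCache {
    /// For each validator index, the history of (epoch added, pubkey) pairs,
    /// ordered by epoch. With validator index reuse this could hold more than
    /// one entry per index.
    history: HashMap<ValidatorIndex, Vec<(Epoch, Arc<PublicKey>)>>,
}

impl GlobalPubkeyCache {
    /// Return the pubkey for `index` as it was known at `epoch`, if any.
    fn get_at(&self, index: ValidatorIndex, epoch: Epoch) -> Option<Arc<PublicKey>> {
        self.history
            .get(&index)?
            .iter()
            .rev()
            .find(|(added_at, _)| *added_at <= epoch)
            .map(|(_, pubkey)| pubkey.clone())
    }
}

/// Each beacon state would hold a shared handle to the cache rather than its
/// own copy, and query it with its current epoch.
struct BeaconStateCaches {
    pubkeys: Arc<RwLock<GlobalPubkeyCache>>,
}
```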
> This approach might be more compatible with validator index reuse
I don't think we should consider index reuse for now. If it ever comes it will be a long way off, and the Beam chain might happen in the middle.
Two separate cleanups:
- Remove the `pubkey_bytes` field from the global cache. It is rarely used and quite unnecessary, because we can always load pubkey bytes from the head state, or load a `PublicKey` and quickly compress it.
- De-dupe the global and state caches. Lion and I discussed this offline and settled on an approach where we remove the cache from the `BeaconState` and have a persistent cache that is cloned before block processing (keeping block processing lock-free), as sketched below.
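A hedged sketch of the clone-before-block-processing idea, again with hypothetical names and assuming an `rpds`-style persistent map:

```rust
use std::sync::{Arc, RwLock};

use rpds::HashTrieMap;

type ValidatorIndex = u64;
type PublicKey = [u8; 48];
type PubkeyCache = HashTrieMap<ValidatorIndex, Arc<PublicKey>>;

/// The chain keeps a single persistent cache behind a lock.
struct ChainPubkeyCache {
    cache: RwLock<PubkeyCache>,
}

impl ChainPubkeyCache {
    /// Take a cheap snapshot before block processing. The snapshot shares all
    /// of its memory with the chain cache, and block processing can read from
    /// and extend it without holding any lock.
    fn snapshot(&self) -> PubkeyCache {
        self.cache.read().unwrap().clone()
    }

    /// After the block is successfully imported, publish the updated snapshot
    /// under a short write lock. A real implementation would need to reconcile
    /// entries added by concurrently imported blocks rather than overwrite them.
    fn commit(&self, updated: PubkeyCache) {
        *self.cache.write().unwrap() = updated;
    }
}
```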
Lodestar hit an issue with Electra deposits that is worth keeping in mind when we refactor the cache:
- https://github.com/ChainSafe/lodestar/pull/7284#issuecomment-2547305198
The main thing to avoid is checking the global cache for the existence of a key that was added on a side-chain at a later epoch.
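A hypothetical illustration of that pitfall (none of these names exist in Lighthouse); the point is only that the existence check must be scoped to the ancestry of the block being processed, not the chain-wide cache:

```rust
use std::collections::HashMap;

type ValidatorIndex = u64;
type PublicKeyBytes = [u8; 48];

/// Map from pubkey to validator index, as seen by some cache.
type PubkeyToIndex = HashMap<PublicKeyBytes, ValidatorIndex>;

/// Decide whether an Electra-style deposit for `pubkey` is a top-up of an
/// existing validator or the creation of a new one.
fn deposit_is_top_up(
    // Cache restricted to the history of the chain being processed.
    state_scoped: &PubkeyToIndex,
    pubkey: &PublicKeyBytes,
) -> bool {
    // Correct: only an index that exists in *this* state's history counts.
    // Checking the chain-wide cache here could find a validator that was only
    // created on a side-chain at a later epoch, silently turning a
    // new-validator deposit into a top-up.
    state_scoped.contains_key(pubkey)
}
```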
It takes ~800ms to build this cache on a state cache miss, so it's worth eliminating that cost to reduce the impact of state cache misses.
If we don't abolish the local pubkey caches, there's a micro-optimisation here that we could try:
- https://github.com/sigp/lighthouse/pull/7849#discussion_r2266650349