Caching GPU resources using their unique store indices as keys
Today, most of the compute power to render a frame goes into instantiating re_renderer primitives: deserializing all of the raw Arrow data that the store returns, and effectively turning it into actual GPU buffers that we can upload and render.
Querying the store and uploading the data to the GPU are actually fairly cheap: what's slow is deserializing the raw component data, joining everything together, and finally building a set of primitives that the renderer can actually work with.
Since our datastore is an append-only and otherwise immutable data structure (ignoring GC), it should be possible to maintain an LRU cache that maps a list of `(Component, RowIndex)` tuples to the resulting GPU buffer data.
This cache could then be re-used not only across frames, but also across views within a single frame:
- Query datastore, get list of `(Component, RowIndex)` tuples
- Check LRU cache using this list as key (and probably `EntityPath` etc)
- If it's a hit, feed that back to the renderer
- Otherwise, do the same as we do today and insert the result back into the LRU
This naturally handles out-of-order insertions as well as multi-threading, since everything is append-only. A read/write race on the cache across different views should be harmless, since the mapping from key to value is deterministic: both racers compute the same result, so it doesn't matter whose insert lands last.
This also fits nicely with the ideas expressed in #426.
Questions:
- What about hover/picking and such? We probably need to cache some state for that too.
  - Once we move to a model where we re-query hovered/selected objects in order to render them again for outlines, instead of checking selected-ness while processing, this might be less of a concern.