madara
dev: add LRU cache layer to block fetching client-side
```rust
/// Manage LRU caches for block data and their transaction statuses.
/// These are large and take a lot of time to fetch from the database.
/// Storing them in an LRU cache will allow to reduce database accesses
/// when many subsequent requests are related to the same blocks.
pub struct EthBlockDataCacheTask<B: BlockT>(mpsc::Sender<EthBlockDataCacheMessage<B>>);
```
This code was taken from frontier and is used extensively in their RPC layer. We should investigate adding this on our side and evaluate the performance gain using https://github.com/keep-starknet-strange/gomu-gomu-no-gatling
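To illustrate the idea behind the frontier cache (this is a minimal std-only sketch, not the frontier implementation; the `BlockHash`/`BlockData` type aliases and the capacity are illustrative), an LRU cache keyed by block hash keeps the most recently requested blocks in memory and evicts the least recently used one when full, so repeated requests for the same blocks skip the database:

```rust
use std::collections::{HashMap, VecDeque};

// Hypothetical stand-ins for the real block types.
type BlockHash = u64;
type BlockData = String;

/// A minimal LRU cache: recently used entries survive, the least
/// recently used entry is evicted once capacity is reached.
struct LruCache {
    capacity: usize,
    map: HashMap<BlockHash, BlockData>,
    // Front = least recently used, back = most recently used.
    order: VecDeque<BlockHash>,
}

impl LruCache {
    fn new(capacity: usize) -> Self {
        Self { capacity, map: HashMap::new(), order: VecDeque::new() }
    }

    fn get(&mut self, hash: &BlockHash) -> Option<&BlockData> {
        if self.map.contains_key(hash) {
            // Move the key to the most-recently-used position.
            self.order.retain(|h| h != hash);
            self.order.push_back(*hash);
            self.map.get(hash)
        } else {
            None
        }
    }

    fn insert(&mut self, hash: BlockHash, data: BlockData) {
        if self.map.contains_key(&hash) {
            self.order.retain(|h| h != &hash);
        } else if self.map.len() == self.capacity {
            // Evict the least recently used entry.
            if let Some(old) = self.order.pop_front() {
                self.map.remove(&old);
            }
        }
        self.order.push_back(hash);
        self.map.insert(hash, data);
    }
}

fn main() {
    let mut cache = LruCache::new(2);
    cache.insert(1, "block 1".to_string());
    cache.insert(2, "block 2".to_string());
    cache.get(&1); // touch block 1 so it becomes most recently used
    cache.insert(3, "block 3".to_string()); // evicts block 2
    assert!(cache.get(&2).is_none());
    assert!(cache.get(&1).is_some());
}
```

The frontier version wraps this behavior in a background task behind an `mpsc` channel so concurrent RPC handlers can share one cache; the eviction logic itself is the same.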
Since @tdelabro pointed me to this issue, I can try to work on it for our RPC layer. I would be glad to try out performance testing as well.
Yes sure. Try to draw inspiration from the frontier impl: https://github.com/paritytech/frontier
Hey! Sorry for not giving any news before today; I was knocked out by a bad cold.
I have been wondering about a detail of the cache implementation which I think is still important to settle. In the frontier implementation we can see that they have a max_size parameter which limits the total stored size of the cache to a given threshold.
However, I think this implementation has one flaw: it only looks at the size of the stored values, not at the allocated size (e.g. for a value "a", the value size is 1 byte while the allocated size is 148 bytes). In the context of P2P networks and precise memory management, I was wondering if I should:
- Keep the same memory management as frontier, which is imprecise but will allow for more values to be stored.
- Change the memory management to track allocated memory, allowing for more precise memory predictability.
- Have a middle ground where either or both could be selected.
Eager to get your take on that :raised_hands:
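To make the difference concrete, here is a small sketch of the two accounting strategies (the function names are illustrative, not frontier's API; the overhead figure is only the container's own size and ignores allocator bookkeeping):

```rust
/// Weigh only the bytes the value logically holds.
fn value_size(v: &Vec<u8>) -> usize {
    v.len()
}

/// Weigh the memory the value actually reserves: the heap capacity
/// plus the size of the container itself. Still an approximation
/// (allocator bookkeeping is ignored), but much closer to real
/// memory use.
fn allocated_size(v: &Vec<u8>) -> usize {
    v.capacity() + std::mem::size_of::<Vec<u8>>()
}

fn main() {
    // A 1-byte value stored in a vector that reserved 128 bytes.
    let mut v: Vec<u8> = Vec::with_capacity(128);
    v.push(b'a');
    println!("value size: {} bytes", value_size(&v));
    println!("allocated size: at least {} bytes", allocated_size(&v));
    assert_eq!(value_size(&v), 1);
    assert!(allocated_size(&v) >= 128);
}
```

A possible middle ground would be to make the weighing function pluggable (a closure or trait passed to the cache at construction), so either accounting mode can be selected per cache.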
I think you can start with the easier one, and then, if it's not too difficult, switch to the second one, which is more precise.
There hasn't been any activity on this issue recently, and in order to prioritize active issues, it will be marked as stale. Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by leaving a 👍 Because this issue is marked as stale, it will be closed and locked in 7 days if no further activity occurs. Thank you for your contributions!
@tchataigner has been working on l1-l2 messaging recently. If someone wants to pick up the work on this issue where he left off, that would be great.
repository archived in favor of https://github.com/madara-alliance/madara