
dev: add LRU cache layer to block fetching client-side

Open EvolveArt opened this issue 1 year ago • 7 comments

/// Manage LRU caches for block data and their transaction statuses.
/// These are large and take a lot of time to fetch from the database.
/// Storing them in an LRU cache allows us to reduce database accesses
/// when many subsequent requests are related to the same blocks.
pub struct EthBlockDataCacheTask<B: BlockT>(mpsc::Sender<EthBlockDataCacheMessage<B>>);

This code was taken from frontier, where it is used extensively in their RPC layer. We should investigate adding it on our side and evaluate the performance gain using https://github.com/keep-starknet-strange/gomu-gomu-no-gatling
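As a rough starting point, a similar cache task on our side could look something like the sketch below. This is illustrative only, under the assumption of tokio channels and the lru crate; BlockDataCacheTask and BlockDataCacheMessage are made-up names, not existing madara types.

// Rough sketch of a block-data cache task, loosely following the frontier
// pattern above. BlockDataCacheMessage and the lru crate usage are
// illustrative assumptions, not existing madara code.
use lru::LruCache;
use std::num::NonZeroUsize;
use tokio::sync::{mpsc, oneshot};

enum BlockDataCacheMessage<Hash, Block> {
    RequestBlock {
        hash: Hash,
        response_tx: oneshot::Sender<Option<Block>>,
    },
    FetchedBlock { hash: Hash, block: Block },
}

pub struct BlockDataCacheTask<Hash, Block>(mpsc::Sender<BlockDataCacheMessage<Hash, Block>>);

impl<Hash, Block> BlockDataCacheTask<Hash, Block>
where
    Hash: std::hash::Hash + Eq + Send + 'static,
    Block: Clone + Send + 'static,
{
    pub fn new(cache_capacity: NonZeroUsize) -> Self {
        let (tx, mut rx) = mpsc::channel(100);

        // A single task owns the cache, so RPC handlers never need a lock:
        // they send a message and await the oneshot response.
        tokio::spawn(async move {
            let mut cache: LruCache<Hash, Block> = LruCache::new(cache_capacity);
            while let Some(message) = rx.recv().await {
                match message {
                    BlockDataCacheMessage::RequestBlock { hash, response_tx } => {
                        // A cache hit skips the database entirely; a miss is
                        // answered with None and the caller falls back to the DB.
                        let _ = response_tx.send(cache.get(&hash).cloned());
                    }
                    BlockDataCacheMessage::FetchedBlock { hash, block } => {
                        cache.put(hash, block);
                    }
                }
            }
        });

        Self(tx)
    }
}

The single-task design mirrors frontier's approach: the cache is owned by one spawned task, so the RPC layer only exchanges messages instead of sharing a locked data structure.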

EvolveArt avatar Jul 31 '23 13:07 EvolveArt

Since @tdelabro pointed me to this issue, I can try to work on this for our RPC layer. I would be glad to try out performance testing as well.

tchataigner avatar Aug 30 '23 10:08 tchataigner

Yes, sure. Try to draw inspiration from the frontier implementation: https://github.com/paritytech/frontier

tdelabro avatar Aug 30 '23 13:08 tdelabro

Hey! Sorry for not giving any news before today; I was knocked out by a bad cold.

I have been wondering about a detail in the cache implementation that I think is still important to settle. In the frontier implementation, there is a max_size parameter which caps the cache's total stored size at a given threshold.

However, I think this implementation has one flaw: it only looks at the size of the stored values, not at the allocated size (e.g. for a value "a", the value size is 1 byte while the allocated size is around 148 bytes). In the context of P2P networks and precise memory management, I was wondering whether I should:

  1. Keep the same memory management as frontier, which is imprecise but allows more values to be stored.
  2. Switch the accounting to allocated memory, allowing for more precise memory predictability.
  3. Have a middle ground where either or both could be selected.

Eager to get your take on that :raised_hands:
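To make the difference concrete, here is a small hypothetical sketch contrasting the two ways of measuring an entry; SizedEntry is an illustrative name, not a frontier or madara type.

// Hypothetical sketch (not frontier or madara code) contrasting the two
// accounting strategies for a cached entry backed by a Vec<u8>.
struct SizedEntry {
    value: Vec<u8>,
}

impl SizedEntry {
    /// Option 1 (frontier-style): count only the bytes actually stored.
    fn stored_size(&self) -> usize {
        self.value.len()
    }

    /// Option 2: count what the allocator actually reserved, which can be
    /// much larger than the stored size for small values.
    fn allocated_size(&self) -> usize {
        std::mem::size_of::<Self>() + self.value.capacity()
    }
}

fn main() {
    let mut value = Vec::with_capacity(128);
    value.push(b'a');
    let entry = SizedEntry { value };
    // On a 64-bit target this prints something like: stored = 1, allocated = 152
    println!(
        "stored = {}, allocated = {}",
        entry.stored_size(),
        entry.allocated_size()
    );
}

With option 1 the cache would admit many more such entries than the max_size suggests; with option 2 the memory bound is tighter but fewer values fit.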

tchataigner avatar Sep 04 '23 15:09 tchataigner

I think you can go with the easier one first. Then, if it's not too difficult, switch to the second one, which is more precise.

tdelabro avatar Sep 05 '23 09:09 tdelabro

There hasn't been any activity on this issue recently, and in order to prioritize active issues, it will be marked as stale. Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by leaving a 👍 Because this issue is marked as stale, it will be closed and locked in 7 days if no further activity occurs. Thank you for your contributions!

github-actions[bot] avatar Oct 06 '23 00:10 github-actions[bot]

@tchataigner has been working on L1-L2 messaging recently. If someone wants to pick up the work on this issue where he left it, that would be great.

tdelabro avatar Oct 09 '23 09:10 tdelabro

There hasn't been any activity on this issue recently, and in order to prioritize active issues, it will be marked as stale. Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by leaving a 👍 Because this issue is marked as stale, it will be closed and locked in 7 days if no further activity occurs. Thank you for your contributions!

github-actions[bot] avatar Nov 09 '23 00:11 github-actions[bot]

Repository archived in favor of https://github.com/madara-alliance/madara

tdelabro avatar Aug 02 '24 18:08 tdelabro