Optimizing archive node genesis account function
This PR optimizes the archive node's genesis account import function. The function inserts all accounts specified in the genesis ledger config file, but it does so in a suboptimal way: it opens a single transaction at the start, inserts all accounts, and commits at the end. This can lead to out-of-memory issues in the postgres process, since for instance the mainnet genesis ledger contains 200k+ accounts; we hit that problem on our performance environment. The solution is to make more granular commits during import. The new code chops the list of accounts to insert into batches of 100 (a measured value; a batch size of 500 still drove postgres memory up to ~40 GB).
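The batching approach described above could be sketched roughly as follows (a minimal illustration, not the actual archive-node code: `chunks_of`, `import_genesis_accounts`, and the `insert_batch` callback standing in for the real per-batch insert-and-commit are all hypothetical names):

```ocaml
(* Split a list into consecutive batches of at most [length] elements. *)
let chunks_of ~length lst =
  let rec go acc chunk n = function
    | [] -> List.rev (if chunk = [] then acc else List.rev chunk :: acc)
    | x :: rest ->
        if n = length then go (List.rev chunk :: acc) [ x ] 1 rest
        else go acc (x :: chunk) (n + 1) rest
  in
  go [] [] 0 lst

(* Instead of one huge transaction covering all 200k+ accounts, run one
   insert-and-commit per batch; [insert_batch] is a placeholder for that. *)
let import_genesis_accounts ~insert_batch accounts =
  let chunks_length = 100 in
  chunks_of ~length:chunks_length accounts |> List.iter insert_batch
```

Each call to `insert_batch` would open its own transaction and commit, bounding how much uncommitted state postgres has to hold at any one time.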
!ci-docker-me
!ci-docker-me
!ci-build-me
!ci-build-me
!ci-nightly-me
nightly: https://buildkite.com/o-1-labs-2/mina-end-to-end-nightlies/builds/3326
!ci-build-me
Is it possible for us to have a formula instead of using magic numbers?
Regarding the magic number, do you mean that the chunk length is the magic number:
let chunks_length = 100 in
? Or something else too?
Yes, the chunks_length. I assume you could in theory calculate the bound based on available memory and the size of each record?
But if the formula ends up too complicated, I'm fine leaving this as is; I understand that sometimes it's hard to actually derive one.
I think I can extract it into a parameter with a default value. What do you think? Calculating its value based on available resources would be nice, but I think that should be done by a resource-monitoring tool outside the archive process, which would feed in this value. Exposing the parameter should be enough from the archive's perspective.
Make sense, please do :)
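The parameterization agreed on above might look something like this (a sketch under assumptions: `import_accounts` and `insert_batch` are hypothetical names, and the real default would be wired up wherever the archive's configuration flags are defined):

```ocaml
(* Sketch: the batch size becomes an optional parameter with a default of
   100, so a caller (or eventually an external resource monitor / CLI flag)
   can override it. [insert_batch] stands in for the real per-batch
   insert-and-commit. *)
let import_accounts ?(chunks_length = 100) ~insert_batch accounts =
  (* Take at most [n] elements, returning (batch, rest). *)
  let rec take n acc = function
    | x :: tl when n > 0 -> take (n - 1) (x :: acc) tl
    | rest -> (List.rev acc, rest)
  in
  let rec loop = function
    | [] -> ()
    | accounts ->
        let batch, rest = take chunks_length [] accounts in
        insert_batch batch;
        loop rest
  in
  loop accounts
```

Callers that say nothing get the measured default of 100; an operator who knows the environment's memory budget can pass a different `chunks_length` explicitly.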
!ci-build-me
!ci-build-me
!ci-build-me
!ci-build-me