
Set cap for total accounts data size

Open brooksprumo opened this issue 2 years ago • 7 comments

Overview

The data for accounts is currently unbounded, and validators store it on disk (or in tmpfs). The Solana minimum validator specs should factor in a maximum amount of account data allowed.

Note, this work is a stop-gap to ensure the network doesn't fall over under extreme accounts data size usage. There is ongoing work to tackle this in a long-term way, likely based on economics. Please refer to the code/issues/PRs about the compute budget and dynamic rent.

Design

Set a new constant for the maximum accounts data size

  • Set to 128 GB
  • Should this be just the on-chain data, or the actual practical size on disk for a validator? See the Questions section below.
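As a sketch, the cap could be a single constant (the name and location here are hypothetical; per the discussion below it might instead live in ComputeBudget):

```rust
/// Maximum total accounts data size, in bytes (128 GB).
/// Hypothetical name; the real code may place this in ComputeBudget.
pub const MAX_ACCOUNTS_DATA_LEN: u64 = 128 * 1024 * 1024 * 1024;
```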

Make transactions fail if they exceed the max accounts data size

If a transaction attempts to allocate data such that it would exceed the maximum allowed accounts data size, then the transaction shall fail. As far as I know, this only affects creating and reallocating accounts.

  • Need to check instructions that create new accounts or reallocate existing ones against the remaining amount of account data size
  • Add Account Data Budget/Meter to InvokeContext
  • Handle CPI
  • Since transactions are processed in parallel, and we need transaction results to be deterministic, this size checking will be done at the block level.
  • Replay Stage will fail blocks that exceed the limit
  • Banking Stage will drop transactions that would exceed the limit
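The meter described above could look something like this minimal sketch (all names are hypothetical; the real meter would live on InvokeContext and return a proper TransactionError):

```rust
/// Sketch of an accounts-data meter consulted before each allocation.
pub struct AccountsDataMeter {
    /// The cap on total accounts data size, in bytes.
    maximum: u64,
    /// The current total accounts data size, in bytes.
    current: u64,
}

impl AccountsDataMeter {
    pub fn new(maximum: u64, current: u64) -> Self {
        Self { maximum, current }
    }

    /// Space still available before hitting the cap.
    pub fn remaining(&self) -> u64 {
        self.maximum.saturating_sub(self.current)
    }

    /// Try to consume `amount` bytes; the transaction fails if the
    /// allocation would exceed the maximum accounts data size.
    pub fn consume(&mut self, amount: u64) -> Result<(), &'static str> {
        if amount > self.remaining() {
            return Err("MaxAccountsDataSizeExceeded");
        }
        self.current += amount;
        Ok(())
    }
}
```

A CPI would share the same meter, so nested allocations draw from one budget.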

Track total account data size in Bank

  • Something needs to provide InvokeContext with the values for Accounts Data Budget, and probably Bank is the right spot
  • Add an accounts_data_len field to Bank
  • Update the total account data size after processing transactions
  • When making a new Bank from a parent, copy the parent bank's total account data size
  • Need to handle non-transaction operations that cause accounts to be cleaned up (i.e. go to zero lamports and thus decrease the accounts data len). This is Rent collection.
  • May need to store an initial accounts data len and a delta
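The initial-value-plus-delta idea from the last bullet might be sketched like this (names are hypothetical, not the actual Bank fields):

```rust
/// Sketch of tracking total accounts data size in a Bank as an
/// initial value (inherited from the parent bank, or computed at
/// startup) plus a signed delta accumulated while the bank is live.
pub struct BankAccountsData {
    initial: u64,
    /// Net change from transactions and rent collection in this bank;
    /// negative when accounts are cleaned up.
    delta: i64,
}

impl BankAccountsData {
    pub fn new_from_parent(parent_total: u64) -> Self {
        Self { initial: parent_total, delta: 0 }
    }

    /// Apply the size change of one processed transaction or of
    /// rent collection.
    pub fn update(&mut self, change: i64) {
        self.delta = self.delta.saturating_add(change);
    }

    pub fn total(&self) -> u64 {
        if self.delta >= 0 {
            self.initial.saturating_add(self.delta as u64)
        } else {
            self.initial.saturating_sub(self.delta.unsigned_abs())
        }
    }
}
```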

Compute total account data size when loading from a snapshot

  • The first bank needs to have a correct value for its total account data size
  • When loading from a snapshot, this value needs to be computed at load time
  • If loading from Genesis, I assume this can be zero
  • Alternative: store the Bank's accounts_data_len in the snapshot, and then that'll be used when loading from the snapshot.
    • What if the snapshot is from before this feature is activated? We'd still need to get the total accounts data size from somewhere.
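One way to reconcile the alternative with the pre-activation-snapshot problem is a fallback at startup (a hypothetical helper, assuming the stored value is optional in the snapshot format):

```rust
/// Pick the first bank's accounts data len at startup: prefer the
/// value stored in the snapshot, and if the snapshot predates the
/// feature, fall back to summing the data lengths of all accounts.
fn initial_accounts_data_len(
    snapshot_value: Option<u64>,
    account_data_lens: &[u64],
) -> u64 {
    snapshot_value.unwrap_or_else(|| account_data_lens.iter().sum())
}
```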

Feature gate

  • Add a feature gate around this work
  • Tracking the feature gate in #24135

Questions

  • [x] ~Should this be feature-gated?~
    • A: Yes
  • [x] ~Deterministic seed/starting value. When this feature is enabled, we need to ensure that the whole cluster has the same value for the total accounts data size in the current bank. How should that be done?~
    • Is it guaranteed that computing this value at startup is deterministic? If so, then that should work.
    • If the above does not work, an initial constant would work. Since the accounts_data_len already doesn't track everything that a validator holds on disk, this is used as an approximation. So I'd say that using a value that's close should also work.
    • A: Will use a starting/seed value at feature activation
  • [ ] Should the account storage overhead factor into the max accounts data size?
    • Since new accounts without data still consume on disk resources, this might be important. It'll impact computing and tracking the total account data size though.
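If overhead were factored in, each account's contribution to the total might be its data length plus a fixed per-account cost. The overhead constant below is purely illustrative, not the real on-disk storage overhead:

```rust
/// Hypothetical per-account storage overhead, in bytes, for
/// illustration only (the real on-disk overhead would need to be
/// measured from the account storage format).
const STORAGE_OVERHEAD_PER_ACCOUNT: u64 = 128;

/// Effective on-disk size of one account under this scheme: even a
/// zero-data account consumes some disk resources.
fn effective_size(data_len: u64) -> u64 {
    data_len + STORAGE_OVERHEAD_PER_ACCOUNT
}
```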

Tasks

  • [x] #21735
  • [x] #21757
  • [x] #21781
  • [x] #21807
  • [x] #21813
  • [x] #21994
  • [x] #22412
  • [x] #22918
  • [x] #22986
  • [x] #23048
  • [x] #23422
  • [x] #23537
  • [x] #23714
  • [x] #23730
  • [x] #25256
  • [x] #25843
  • [ ] #26439
    • [x] #26772
    • [x] #26776
    • [x] #26773
    • [ ] #26744
  • [ ] #23744
  • [ ] #23717
  • [ ] #23759
  • [ ] Enable the feature
  • [ ] After enabling the feature, remove those parts from the code

brooksprumo avatar Dec 03 '21 21:12 brooksprumo

@CriesofCarrots @t-nelson @sakridge @carllin @jackcmay @jeffwashington Per our call earlier, does this look right for what to do?

brooksprumo avatar Dec 03 '21 21:12 brooksprumo

  • MAX_TOTAL_ACCOUNT_DATA_SIZE could be placed in ComputeBudget
  • Yes this work needs to be featurized

jackcmay avatar Dec 03 '21 22:12 jackcmay

Alternative: store the Bank's accounts_data_len in the snapshot, and then that'll be used when loading from the snapshot.

@brooksprumo well, if accounts_data_len can be computed at the startup time, why bother to add new field to the snapshotted bank? as you know already, changing snapshot format will incur somewhat large labor..

ryoqun avatar Jan 14 '22 13:01 ryoqun

i know. i'm very late to comment at this moment... could you polish up your (already well-written) issue description a bit, regarding the following questions?

Make transactions fail if they exceed the max accounts data size

so, if the cluster reaches 128GB of account data size as a whole, is the cluster basically bricked? i think on-chain data only increases over time, like with other online platforms.

it might be preferable that more nuanced preventive measures are taken, like setting up alerts, increasing rent exponentially toward the 128GB hard cap, pre-allocating the 128GB on disk, or implementing a stale account data eviction mechanism?

ryoqun avatar Jan 14 '22 13:01 ryoqun

@ryoqun

if accounts_data_len can be computed at the startup time, why bother to add new field to the snapshotted bank? as you know already, changing snapshot format will incur somewhat large labor..

Storing the accounts_data_len in the snapshot instead of calculating it during startup was suggested by @sakridge. I just want to make sure that every node cluster-wide gets the same value of accounts_data_len for a given (rooted) bank, regardless of whether the node is just booting up or has been running for a long time. I'm not sure which is better (adding a field to the snapshot, or computing at startup), and I'm happy to go with the better method here. Is calculating at startup preferred in your opinion?

if the cluster reaches to the 128GB of account data size as a whole, the cluster basically bricked?

No. Only transactions that increase accounts data size will fail; all other transactions will be unaffected.

it might be preferable that more nuanced preventive measures are taken, like setting up alerts, increasing rent exponentially toward the 128GB hard cap, pre-allocating the 128GB on disk, or implementing a stale account data eviction mechanism?

I agree. I'll make it clearer in the issue that this is a stop-gap for the better preventative measures that you've outlined. Dynamic rent (#21348) is another issue I'm looking at. I like the idea of setting up alerts; I'll look into adding that. Same with pre-allocating.

or implementing stale account data eviction mechanism?

My understanding is that this is what Rent is for. I'm not sure there's an acceptable way to evict rent-exempt account data, right? Happy to hear about use-cases for this though!

brooksprumo avatar Jan 14 '22 14:01 brooksprumo

if accounts_data_len can be computed at the startup time, why bother to add new field to the snapshotted bank? as you know already, changing snapshot format will incur somewhat large labor..

More info: I just ran some builds on my GCE validator against mnb to see if the accounts data size computed when loading a snapshot matched the value from regular replay for a bank at the same slot. The two values were different. I'm not sure yet why, but that is the reason why storing the accounts data size in the snapshot would be needed.

brooksprumo avatar Jan 17 '22 23:01 brooksprumo

thanks for more details.

Is calculating at startup preferred in your opinion?

yeah, but not a strong opinion. the role of accounts_data_len would be like the snapshot hash. ideally, it should be possible to calculate this for any root. i'm a bit worried about silently introducing some discrepancy bug, as you just observed (https://github.com/solana-labs/solana/issues/21604#issuecomment-1014956433)

i think a bank (snapshot) field should be avoided if possible. also, a small sysvar might be desired if unavoidable, for the ultimate safe snapshot thing (pending for toooo long....) https://github.com/solana-labs/solana/pull/6936

ryoqun avatar Jan 20 '22 12:01 ryoqun

Closing this issue, as the remaining work cannot be merged. It would introduce potential non-determinism. For more information, please refer to https://github.com/solana-labs/solana/issues/27029

brooksprumo avatar Feb 10 '23 22:02 brooksprumo