
[Feature] Store batch proposal to disk on shutdown

Open raychu86 opened this issue 11 months ago • 8 comments

🚀 Feature

Store the current proposal to DB on shutdown.

Currently, when a node shuts down or reboots while a batch proposal is still in progress, the proposal is dropped. Once the node restarts, it needs to generate a new proposal for the same round. Other nodes that see this new proposal will consider the node malicious, because each validator may have at most one proposal per round.

This is really only meaningful if the other nodes have not progressed past this round between the shutdown and reboot.

The solution is to store the pending batch proposal to disk on shutdown and reload it on bootup.
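
For illustration, a minimal sketch of that flow, assuming a hypothetical `PendingProposal` type and a simple file-based destination (the real snarkOS `Proposal` carries the batch header, transmissions, and collected signatures, and the storage target is discussed below):

```rust
use std::fs;
use std::io;
use std::path::Path;

// Hypothetical stand-in for the node's pending batch proposal; the real
// snarkOS `Proposal` carries the batch header and collected signatures.
#[derive(Debug, PartialEq)]
struct PendingProposal {
    round: u64,
    payload: Vec<u8>,
}

impl PendingProposal {
    // Minimal manual (de)serialization to keep the sketch dependency-free.
    fn to_bytes(&self) -> Vec<u8> {
        let mut bytes = self.round.to_le_bytes().to_vec();
        bytes.extend_from_slice(&self.payload);
        bytes
    }

    fn from_bytes(bytes: &[u8]) -> io::Result<Self> {
        if bytes.len() < 8 {
            return Err(io::Error::new(io::ErrorKind::InvalidData, "proposal blob too short"));
        }
        let round = u64::from_le_bytes(bytes[..8].try_into().unwrap());
        Ok(Self { round, payload: bytes[8..].to_vec() })
    }
}

// On shutdown: persist the pending proposal (if any) alongside the node's storage.
fn store_on_shutdown(proposal: &PendingProposal, path: &Path) -> io::Result<()> {
    fs::write(path, proposal.to_bytes())
}

// On bootup: reload the proposal and delete the file so it is only replayed once.
fn load_on_bootup(path: &Path) -> io::Result<Option<PendingProposal>> {
    if !path.exists() {
        return Ok(None);
    }
    let bytes = fs::read(path)?;
    fs::remove_file(path)?;
    PendingProposal::from_bytes(&bytes).map(Some)
}
```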

raychu86 avatar Mar 15 '24 00:03 raychu86

Another, more resilient but slower solution is to persist the pending batch proposal to disk before it is broadcast (we cannot guarantee a clean shutdown, e.g. when the process is SIGKILLed from outside for various reasons).

vvp avatar Mar 15 '24 15:03 vvp

You'd also want to store the signatures you've received for that proposal, because peers won't re-sign a proposal they've already seen. So just storing a proposal before it is broadcast is not enough; you'd likely have to store it again every time you receive a new signature as well.
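
A hedged sketch of what persist-on-every-signature could look like, again with hypothetical types and a placeholder encoding rather than the actual snarkOS ones:

```rust
use std::collections::BTreeMap;
use std::fs;
use std::io;
use std::path::Path;

// Hypothetical signature and address types; the real ones live in snarkVM.
type Address = [u8; 32];
type Signature = Vec<u8>;

// Hypothetical pending proposal plus the signatures gathered so far.
struct PendingProposal {
    round: u64,
    batch: Vec<u8>,
    signatures: BTreeMap<Address, Signature>,
}

impl PendingProposal {
    // Placeholder serialization; the real node would reuse its canonical encoding.
    fn to_bytes(&self) -> Vec<u8> {
        let mut bytes = self.round.to_le_bytes().to_vec();
        bytes.extend_from_slice(&self.batch);
        for (addr, sig) in &self.signatures {
            bytes.extend_from_slice(addr);
            bytes.extend_from_slice(sig);
        }
        bytes
    }

    // Record a newly received signature and immediately re-persist the proposal,
    // so a crash between signatures loses at most the most recent one.
    fn add_signature_and_persist(
        &mut self,
        signer: Address,
        signature: Signature,
        path: &Path,
    ) -> io::Result<()> {
        self.signatures.insert(signer, signature);
        fs::write(path, self.to_bytes())
    }
}
```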

raychu86 avatar Mar 15 '24 15:03 raychu86

Happy to tackle this one, just a few questions:

  • could we persist the proposal to a file instead? it's probably better suited for this use case
  • what should we do in case of a hard/unclean shutdown, then?

ljedrz avatar Mar 20 '24 12:03 ljedrz

@ljedrz

  1. Preferably the proposal goes to the same DB, in case the validator needs to migrate machines or directories, so a file is not ideal. We could store it as a blob in RocksDB.

  2. I believe hard/unclean shutdowns could cause DB corruption, so I think it's sufficient for this to be best-effort.

raychu86 avatar Mar 29 '24 22:03 raychu86

  1. Preferably the proposal goes to the same DB, in case the validator needs to migrate machines or directories, so a file is not ideal. We could store it as a blob in RocksDB.

I've taken an initial stab at it, and I have some issues with this approach:

  • a single-entry blob is somewhat odd in our DataMap-first setup; while not impossible to implement that way, I'd consider it "clunky"
  • trying to use a Proposal in the storage-service crate introduces a cyclic package dependency
  • if a validator migrates machines or directories, it would also migrate their ~/.aleo folder, so any file stored there wouldn't get lost; the storage itself is kept there after all
  2. I believe hard/unclean shutdowns could cause DB corruption, so I think it's sufficient for this to be best-effort.

That shouldn't happen by design, because we're using atomic writes; we would have to deviate from that in order to cause a storage corruption.
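
As an aside, a minimal illustration of the atomic-write pattern using the rocksdb crate directly (not snarkOS's actual storage layer): all puts in a batch land together or not at all, so a crash mid-write cannot leave a half-updated DB.

```rust
use rocksdb::{DB, Options, WriteBatch};

fn main() -> Result<(), rocksdb::Error> {
    let mut opts = Options::default();
    opts.create_if_missing(true);
    let db = DB::open(&opts, "/tmp/atomic-write-demo")?;

    // Group related updates into a single batch; RocksDB applies it atomically,
    // so a crash cannot leave only some of these keys written.
    let mut batch = WriteBatch::default();
    batch.put(b"certificate/42", b"...");
    batch.put(b"round/latest", b"42");
    db.write(batch)?;

    Ok(())
}
```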

Summing up: while it is possible to work around the cyclic dependency, it would either require a refactoring effort across a few crates or a conversion between two different Proposal objects. I would still recommend using a single file instead.

Your call @raychu86.

ljedrz avatar Apr 01 '24 10:04 ljedrz

The DataMap approach is still preferred for a unified storage.

We used to use the Blob (Vec<u8>) approach when storing rejected transactions - https://github.com/AleoHQ/snarkVM/commit/fa723a64b0a92afc48949595efef7c0618801bf3#diff-fd03badfe6a0b2934f728aec159249acfe3d2f4218f062f88edbb4191d28942cL87.

raychu86 avatar Apr 01 '24 16:04 raychu86

Ah yes, the value is not an issue; it's more that the key is not really meaningful for a single blob. While it can be worked around, my only ideas are things that would look weird at first glance (e.g. using the unit type () as the key, or introducing a dedicated placeholder object); the most structured approach would be to introduce a single-entry counterpart to the DataMap (e.g. a DataEntry) to handle cases like this.
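
For illustration only, a sketch of the single-blob workaround against the rocksdb crate directly, using a fixed placeholder key since there is nothing meaningful to key a lone entry on (the real implementation would go through the DataMap abstraction discussed above):

```rust
use rocksdb::{DB, Error};

// Fixed placeholder key for the lone entry; with a single blob there is
// nothing meaningful to key on, which is the awkwardness discussed above.
const PENDING_PROPOSAL_KEY: &[u8] = b"pending_batch_proposal";

fn save_proposal(db: &DB, proposal_bytes: &[u8]) -> Result<(), Error> {
    db.put(PENDING_PROPOSAL_KEY, proposal_bytes)
}

fn load_proposal(db: &DB) -> Result<Option<Vec<u8>>, Error> {
    db.get(PENDING_PROPOSAL_KEY)
}

fn clear_proposal(db: &DB) -> Result<(), Error> {
    db.delete(PENDING_PROPOSAL_KEY)
}
```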

ljedrz avatar Apr 01 '24 16:04 ljedrz

To share additional context on this topic:

During the ZKSecurity audit of the BFT, the priority was safety over liveness. When you turn off a node, its mempool is cleared, and the in-memory batch proposal is cleared with it. However, the other validators remain online and retain a copy of your validator's batch proposal. When your validator comes back online, its batch proposal will no longer match (for the same round), and the other validators will not sign it. Again, this is to prioritize safety over liveness.

Now, there are 2 practical ways to resolve this:

  1. Set a timer expiration on other validators' batch proposals (tradeoff: liveness); see the sketch after this list.
  2. Have your validator store your batch when shutting down, and rebroadcast your saved batch proposal when you come back online. (tradeoff: staleness)
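
A rough sketch of option 1, with hypothetical types: cache peer proposals with a receipt timestamp and evict them after a TTL, freeing the round for a re-proposal.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Hypothetical identifiers; the real node keys proposals by validator address and round.
type ValidatorId = u64;
type Round = u64;

struct CachedProposal {
    received_at: Instant,
    proposal_bytes: Vec<u8>,
}

struct ProposalCache {
    ttl: Duration,
    proposals: HashMap<(ValidatorId, Round), CachedProposal>,
}

impl ProposalCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, proposals: HashMap::new() }
    }

    // Remember a peer's proposal for a round, stamped with the receipt time.
    fn insert(&mut self, validator: ValidatorId, round: Round, proposal_bytes: Vec<u8>) {
        self.proposals.insert(
            (validator, round),
            CachedProposal { received_at: Instant::now(), proposal_bytes },
        );
    }

    // Drop proposals older than the TTL, so a restarted validator is free to
    // re-propose for the round (the liveness tradeoff mentioned above).
    fn expire(&mut self) {
        let ttl = self.ttl;
        self.proposals.retain(|_, cached| cached.received_at.elapsed() < ttl);
    }
}
```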

howardwu avatar Apr 02 '24 16:04 howardwu

@raychu86 can this be closed?

vicsn avatar Jun 29 '24 15:06 vicsn