
Nakamoto: Signature concatenation malleability

Open kantai opened this issue 1 year ago • 7 comments

In the v0 signer signature implementation, the concatenated signature is vulnerable to malleability: a 100% participation signature could be transformed into an 80% participation signature simply by dropping some of the signatures.
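
To make the malleability concrete, here is a toy sketch of a weight-threshold check over a concatenated signature vec (equal-weight signers and the `meets_threshold` helper are assumptions for illustration; the real v0 check differs):

```rust
// Toy model of a concatenated-signature threshold check. Equal-weight
// signers are an assumption for illustration only.
struct SignerSig {
    weight: u32, // voting weight of the signer who produced this signature
}

// Accept if the supplied signatures carry at least `threshold_pct` percent
// of the total signing weight.
fn meets_threshold(sigs: &[SignerSig], total_weight: u32, threshold_pct: u32) -> bool {
    let signed: u32 = sigs.iter().map(|s| s.weight).sum();
    signed * 100 >= total_weight * threshold_pct
}

fn main() {
    // Ten equal-weight signers, 70% threshold: full participation passes.
    let full: Vec<SignerSig> = (0..10).map(|_| SignerSig { weight: 1 }).collect();
    assert!(meets_threshold(&full, 10, 70));

    // An attacker drops two entries from the vec: the malleated 80%
    // participation signature still passes the same check.
    assert!(meets_threshold(&full[..8], 10, 70));
    println!("malleated signature vec still verifies");
}
```

Nothing in the check binds the block to one particular signature subset, which is exactly the malleability described above.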

After some consideration on this morning's Nakamoto call, the conclusion was that this is okay: the BlockHeaderHash incorporates the signature, so it commits to a single signature vec. The subsequent block (and its signer set) commits to that BlockHeaderHash. This leaves the chain tip with multiple valid siblings, but this is okay because the actual block contents are in all cases identical (and these siblings would have different BlockHeaderHashes, so the network stack and block processing should not be impacted).
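
The commitment structure can be sketched as follows (`toy_hash`, the `Header` fields, and the two hash helpers are illustrative stand-ins; the real BlockHeaderHash and StacksBlockId computations differ):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for a cryptographic hash.
fn toy_hash<T: Hash>(value: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    value.hash(&mut hasher);
    hasher.finish()
}

#[derive(Hash)]
struct Header {
    parent_block_id: u64,
    tx_merkle_root: u64,
}

// The signer sighash commits to the header alone...
fn signer_sig_hash(header: &Header) -> u64 {
    toy_hash(header)
}

// ...while the block's network-facing ID also commits to the signature vec.
fn block_id(header: &Header, sigs: &[u64]) -> u64 {
    toy_hash(&(signer_sig_hash(header), sigs))
}

fn main() {
    let header = Header { parent_block_id: 1, tx_merkle_root: 2 };
    let full_sigs = vec![10, 11, 12, 13, 14];
    let dropped = vec![10, 11, 12, 13]; // attacker drops one signature

    // Identical content, but the two siblings have distinct IDs on the
    // network, so block processing can tell them apart...
    assert_ne!(block_id(&header, &full_sigs), block_id(&header, &dropped));
    // ...while the sighash (over the content the signers signed) is the same.
    let same_header = Header { parent_block_id: 1, tx_merkle_root: 2 };
    assert_eq!(signer_sig_hash(&header), signer_sig_hash(&same_header));
    println!("siblings differ only in block ID");
}
```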

I think this means that no implementation change is necessary, but I'm opening this issue so that there's a place to think more about this.

kantai avatar May 21 '24 15:05 kantai

There is one consideration I've thought of. A malicious node could serve multiple different (and ultimately not confirmed) siblings of the chain tip to a victim node, and thus prevent it from learning the tip. This is probably inconsequential though, since the node will eventually learn the block after it gets confirmed.


jcnelson avatar May 21 '24 15:05 jcnelson

Yep, I agree. I think that issue ends up being a fairly limited one: the victim node would still have the same chain state view during the attack (because the actual block contents are identical), and then, as you said, it would eventually receive the real block when the next block is produced anyway.

kantai avatar May 21 '24 16:05 kantai

Hmm, there might be something here. If there are N signers, and (for simplicity) suppose K < N of them are sufficient to sign the block, then a malicious node might be able to force the victim to process O(N - K) blocks by serving it every possible valid signature combination. I suppose the victim could defend itself by refusing to process the same block (modulo signatures) more than twice (once when it's unconfirmed, and once when it's confirmed), but that might itself be too constraining, because siblings can arise naturally in the presence of flash blocks.
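
To get a feel for how many distinct valid siblings an attacker holding the full signature vec could materialize, here is a toy count under the assumption of equal-weight signers (the `binom`/`valid_siblings` helpers are illustrative, not part of the codebase):

```rust
// Binomial coefficient via the exact multiplicative formula.
fn binom(n: u64, k: u64) -> u64 {
    if k > n {
        return 0;
    }
    let mut result: u64 = 1;
    for i in 0..k {
        result = result * (n - i) / (i + 1);
    }
    result
}

// In the equal-weight toy model, every subset of at least K of the N
// signatures passes the threshold check, so each one is a distinct,
// individually valid sibling of the same block.
fn valid_siblings(n: u64, k: u64) -> u64 {
    (k..=n).map(|size| binom(n, size)).sum()
}

fn main() {
    // With 10 signers and a 7-of-10 threshold, the attacker can serve
    // C(10,7) + C(10,8) + C(10,9) + C(10,10) = 176 distinct siblings.
    assert_eq!(valid_siblings(10, 7), 176);
    println!("{} valid siblings", valid_siblings(10, 7));
}
```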

jcnelson avatar May 21 '24 16:05 jcnelson

I think the fix for this would be to only accept a sibling block that was otherwise identical if it had more signatures than the last-accepted sibling. Then, an honest neighbor could stop a malicious neighbor simply by supplying a block with more signatures (forcing the malicious node to come up with a block with even more signatures if it wanted to compel the victim to do needless work).
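
A sketch of that monotone acceptance rule, and of why it bounds the wasted work (function names and the equal-weight setup are hypothetical):

```rust
// Reprocess an otherwise-identical sibling only if it carries strictly
// more signatures than the last sibling we accepted.
fn should_reprocess_sibling(new_sig_count: usize, last_accepted_sig_count: usize) -> bool {
    new_sig_count > last_accepted_sig_count
}

fn main() {
    let n = 10; // total signers
    let k = 7; // threshold

    let mut last_accepted = 0;
    let mut reprocessed = 0;

    // A malicious neighbor drip-feeds siblings with k, k+1, ..., n
    // signatures; the rule caps the wasted work at n - k + 1 acceptances.
    for sig_count in k..=n {
        if should_reprocess_sibling(sig_count, last_accepted) {
            last_accepted = sig_count;
            reprocessed += 1;
        }
    }
    assert_eq!(reprocessed, n - k + 1);

    // Replaying any already-seen sibling is now a no-op.
    assert!(!should_reprocess_sibling(n, last_accepted));
    println!("at most {} reprocessings", reprocessed);
}
```

Once an honest neighbor supplies the full-participation sibling, no further sibling can force reprocessing at all.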

jcnelson avatar May 21 '24 17:05 jcnelson

> I suppose the victim could defend itself by refusing to process the same block sans signatures more than twice (once when it's unconfirmed, and once when it's confirmed), but that might itself be too constraining because siblings can arise naturally in the presence of flash blocks.

I think this might be the right thing to do. Siblings due to flash blocks would have different signer signature hashes (because they'd be in different tenures), so I'm not worried about that constraint.

In essence, the implementation of this would be something like:

  1. The Nakamoto block headers table would have a signer_sig_hash field.
  2. In consider_next_block, it would check:
```rust
if NakamotoChainstate::has_processed(block.header.signer_sig_hash()) {
    // the pseudocode of `get_staging_blocks` by a chain_length + 1 is hiding *some* complexity:
    //  this needs to be a Vec result here (because of bitcoin forks or the possibility of small tenure forks)
    let potential_child_blocks = NakamotoChainstate::get_staging_blocks(block.header.chain_length + 1);
    if !potential_child_blocks.iter().any(|child_block| child_block.header.parent_block_id == block.block_id()) {
        // do not consider this block for processing yet, because we've previously processed this signer sig hash
        //  and no children build off of this hash
        return;
    }
}
```

  3. After a block is no longer on the chain tip, any non-confirmed siblings should be garbage collected (eagerly!).
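
A runnable toy version of that dedup check, with struct and method names that only loosely follow the pseudocode (the real chainstate APIs differ):

```rust
use std::collections::HashSet;

struct StagedBlock {
    block_id: u64,
    parent_block_id: u64,
    chain_length: u64,
}

struct Chainstate {
    processed_sighashes: HashSet<u64>, // signer sighashes already processed
    staging: Vec<StagedBlock>,         // unprocessed blocks awaiting parents
}

impl Chainstate {
    // Defer a block whose signer sighash we've already processed, unless
    // some staged child actually builds on this sibling's block ID.
    fn should_consider(&self, sighash: u64, block_id: u64, chain_length: u64) -> bool {
        if !self.processed_sighashes.contains(&sighash) {
            return true;
        }
        self.staging
            .iter()
            .filter(|b| b.chain_length == chain_length + 1)
            .any(|b| b.parent_block_id == block_id)
    }
}

fn main() {
    let state = Chainstate {
        processed_sighashes: HashSet::from([0xaa]),
        staging: vec![StagedBlock { block_id: 2, parent_block_id: 1, chain_length: 6 }],
    };
    // Already-processed sighash, no staged child builds on it: deferred.
    assert!(!state.should_consider(0xaa, 9, 5));
    // Same sighash, but a staged child builds on this block ID: consider it.
    assert!(state.should_consider(0xaa, 1, 5));
    // Unseen sighash: always considered.
    assert!(state.should_consider(0xbb, 9, 5));
    println!("dedup check ok");
}
```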

kantai avatar May 22 '24 00:05 kantai

When downloading unconfirmed blocks, the downloader just needs to check `self.tip` for "updating" from peers. E.g.,

Alice: I have blocks up to height n, what do you have?
Bob: I have blocks up to height n+10
Alice: Give me blocks [n, n+10]
Bob: Here are [n, n+10]
Alice: My `n` doesn’t connect to `n+1`, but Bob’s `n` does, so store Bob’s `n`
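
The last step of the exchange above could be sketched like this (all names hypothetical; the real downloader state machine is more involved):

```rust
#[derive(Clone, PartialEq, Debug)]
struct Block {
    id: u64,
    parent_id: u64,
    height: u64,
}

// Alice's block at height n and Bob's are signature-set siblings; keep
// whichever one the downloaded block at height n+1 actually builds on.
fn reconcile_height_n(local_n: &Block, bobs_n: &Block, bobs_n_plus_1: &Block) -> Block {
    if bobs_n_plus_1.parent_id == local_n.id {
        local_n.clone() // our sibling is the one the chain built on
    } else if bobs_n_plus_1.parent_id == bobs_n.id {
        bobs_n.clone() // our n doesn't connect, Bob's does: store Bob's n
    } else {
        local_n.clone() // neither connects; keep ours and keep downloading
    }
}

fn main() {
    let alice_n = Block { id: 100, parent_id: 50, height: 7 };
    let bob_n = Block { id: 101, parent_id: 50, height: 7 };
    let bob_n1 = Block { id: 200, parent_id: 101, height: 8 };

    // n+1 builds on Bob's sibling, so Alice stores Bob's n.
    assert_eq!(reconcile_height_n(&alice_n, &bob_n, &bob_n1), bob_n);
    println!("adopted the connecting sibling");
}
```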

kantai avatar May 29 '24 14:05 kantai

Writing down the conclusion of a huddle:

The endpoint GET /v3/tenures/{remote_tip_block_id}?stop={local_tip_block_id} works by requesting a range of unconfirmed tenure blocks ending (inclusively) with remote_tip_block_id (which is learned from a call to GET /v3/tenures/info), and starting (exclusively) with local_tip_block_id. This endpoint and associated chainstate need to be updated in three ways.

First, this endpoint must include the signer sighash of local_tip_block_id, so the remote node can (1) determine if the requester has the same view of the same fork even if the local_tip_block_id does not match any known block, and (2) determine if the requester has a different view of the signer signatures for local_tip_block_id so it can serve back at least the signatures for that block. The requesting node needs to handle the signature data for the block pointed to by local_tip_block_id by synthesizing a new block with these new signatures, but with the same data and header. Then, both nodes will have the same value for local_tip_block_id referring to the same block, as well as all blocks built atop it.

Second, remote_tip_block_id needs to be the block's signer sighash, not the block ID. This is because the signature data is not yet confirmed by signers, so it's possible that the block ID for this block can change in-between the call to GET /v3/tenures/info and GET /v3/tenures/{remote_tip_block_id}. By using the sighash value for remote_tip_block_id, instead of the block ID, we avoid having to deal with this race condition.

Third, the database schema for blocks should index block data by signer sighash in addition to block ID. Multiple block IDs can point to the same sighash, and each block ID would be associated with a unique set of signatures. Then, the block data and header (sans signatures) are stored at most once, but the node can synthesize a block from any signature set it has received. Multiple signature sets may be stored, but that's okay -- they would remain valid tips to build upon should signers require it.
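
A toy in-memory version of that third change (the schema and field names here are hypothetical, not the actual stacks-core tables): block data is keyed by signer sighash and stored at most once, while each block ID maps to a (sighash, signature set) row from which a full block can be synthesized.

```rust
use std::collections::HashMap;

#[derive(Default)]
struct BlockStore {
    data_by_sighash: HashMap<u64, Vec<u8>>,          // header + txs, deduplicated
    sigs_by_block_id: HashMap<u64, (u64, Vec<u64>)>, // block_id -> (sighash, sigs)
}

impl BlockStore {
    fn insert(&mut self, sighash: u64, block_id: u64, data: Vec<u8>, sigs: Vec<u64>) {
        // Multiple block IDs may point at the same sighash; the data row
        // is written only the first time.
        self.data_by_sighash.entry(sighash).or_insert(data);
        self.sigs_by_block_id.insert(block_id, (sighash, sigs));
    }

    // Synthesize a full block from whichever signature set was requested.
    fn synthesize(&self, block_id: u64) -> Option<(&[u8], &[u64])> {
        let (sighash, sigs) = self.sigs_by_block_id.get(&block_id)?;
        Some((self.data_by_sighash.get(sighash)?.as_slice(), sigs.as_slice()))
    }
}

fn main() {
    let mut store = BlockStore::default();
    // Two siblings: same sighash and data, different block IDs and sig sets.
    store.insert(0xaa, 1, vec![1, 2, 3], vec![10, 11, 12]);
    store.insert(0xaa, 2, vec![1, 2, 3], vec![10, 11]);

    assert_eq!(store.data_by_sighash.len(), 1); // block data stored once
    assert_eq!(store.synthesize(2).unwrap().1, [10u64, 11].as_slice());
    println!("synthesized sibling from shared sighash data");
}
```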

jcnelson avatar May 29 '24 15:05 jcnelson

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

blockstack-devops avatar Oct 27 '24 00:10 blockstack-devops