Add differential fuzzing for libsecp256k1 and decred secp256k1
ref: #331
Add differential fuzzing between the libsecp256k1 (C) and Decred secp256k1 (Go) libraries. This initial implementation covers commonly used functions such as private to public key operations, signing (compact and DER formats), and ECDH operations. The approach can be extended further to include additional functions and other libraries. To keep this PR manageable for review, only these functions and the two libraries are included so far.
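To make the shape of these targets concrete, here is a minimal sketch of the private-to-public comparison, assuming a libFuzzer-style entry point and a hypothetical decred_pubkey_create wrapper exported from the Go module via cgo; the names and interfaces are illustrative, not bitcoinfuzz's actual ones.
#include <cassert>
#include <cstdint>
#include <cstring>
#include <secp256k1.h>

// Hypothetical cgo export from the Decred (Go) module: writes a 33-byte
// compressed public key to out33 and returns 1 on success, 0 on failure.
extern "C" int decred_pubkey_create(const uint8_t* privkey, uint8_t* out33);

static secp256k1_context* secp256k1_ctx = secp256k1_context_create(SECP256K1_CONTEXT_SIGN);

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size)
{
    if (size < 32) return 0;

    // libsecp256k1 side: derive and serialize the compressed public key.
    uint8_t c_pub[33];
    secp256k1_pubkey pubkey;
    int c_ok = secp256k1_ec_seckey_verify(secp256k1_ctx, data) &&
               secp256k1_ec_pubkey_create(secp256k1_ctx, &pubkey, data);
    if (c_ok) {
        size_t len = sizeof(c_pub);
        secp256k1_ec_pubkey_serialize(secp256k1_ctx, c_pub, &len, &pubkey, SECP256K1_EC_COMPRESSED);
    }

    // Decred side: the same derivation through the Go wrapper.
    uint8_t go_pub[33];
    int go_ok = decred_pubkey_create(data, go_pub);

    // Both libraries must agree on success/failure and on the derived key.
    assert(c_ok == go_ok);
    if (c_ok) assert(std::memcmp(c_pub, go_pub, 33) == 0);
    return 0;
}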
Reason for opening in draft is that the functions private_to_public and ecdh are working correctly, but the functions sign_compact and sign_der have mismatched signatures between modules. I’m not sure if this is an actual bug or my implementation issue. I have checked that both functions follow the lower-S form and deterministic RFC6979. I want to know your thoughts before concluding anything.
Nice work! It's looking good.
I'm not sure if we should use the same Secp256k1 library that is inside the Bitcoin module. Having a Git submodule would make it easy to update the secp256k1 library without affecting the bitcoin module. What do you think? @brunoerg
Reason for opening in draft is that the functions
private_to_public and ecdh are working correctly, but the functions sign_compact and sign_der have mismatched signatures between modules. I’m not sure if this is an actual bug or my implementation issue. I have checked that both functions follow the lower-S form and deterministic RFC6979. I want to know your thoughts before concluding anything.
The issue is that when signing, decred always normalizes the signature to lower-S form. We should call the secp256k1_ecdsa_signature_normalize function in the secp256k1 module after signing to have the same behavior.
int create_signature_from_hash(const uint8_t *privkey, const uint8_t *hash, secp256k1_ecdsa_signature& signature)
{
    int ret = 0;
    ret = secp256k1_ec_seckey_verify(secp256k1_ctx, privkey);
    ret = ret && secp256k1_ecdsa_sign(secp256k1_ctx, &signature, hash, privkey, nullptr, nullptr);
    ret = ret && secp256k1_ecdsa_signature_normalize(secp256k1_ctx, &signature, &signature); // add this line
    return ret;
}
Thanks for looking into this. Yes, we could do that and match the implementation of decred secp256k1 with libsecp256k1. However, the reason I did not add this in the first place is that I don’t see CLN using the normalize function at all, as seen here: https://github.com/ElementsProject/lightning/blob/534e3ae8f8694ff6ddf4083e9f43f9fabfa34668/bitcoin/signature.c#L97-L118. I also verified this on Bitcoin StackExchange: https://bitcoin.stackexchange.com/questions/129270/why-do-different-implementations-produce-different-der-signatures-for-the-same-p and it turns out there can be different valid signatures that are all accepted by the Bitcoin network. So, for this target, we can use the normalize function, but a very good target to have would be one that generates a signature using the private key and hash, then provides that signature to other implementations for verification using the public key and hash. I’m not sure (yet) how this could be implemented in bitcoinfuzz, but I think it would be a very useful target.
I'm not sure if we should use the same Secp256k1 library that is inside the Bitcoin module. Having a Git submodule would make it easy to update the secp256k1 library without affecting the bitcoin module. What do you think? @brunoerg
I prefer to have a git submodule, it's better to manage at this point and we don't become dependent on Bitcoin Core. But maybe we could leave it as-is and update it to use a submodule in a follow-up?
I prefer to have a git submodule, it's better to manage at this point and we don't become dependent on Bitcoin Core. But maybe we could leave it as-is and update it to use a submodule in a follow-up?
There are only a couple of changes needed for this, so let’s include them in this PR itself
Thanks for looking into this. Yes, we could do that and match the implementation of decred secp256k1 with libsecp256k1. However, the reason I did not add this in the first place is that I don’t see CLN using the normalize function at all, as seen here: https://github.com/ElementsProject/lightning/blob/534e3ae8f8694ff6ddf4083e9f43f9fabfa34668/bitcoin/signature.c#L97-L118. I also verified this on Bitcoin StackExchange: https://bitcoin.stackexchange.com/questions/129270/why-do-different-implementations-produce-different-der-signatures-for-the-same-p and it turns out there can be different valid signatures that are all accepted by the Bitcoin network. So, for this target, we can use the normalize function, but a very good target to have would be one that generates a signature using the private key and hash, then provides that signature to other implementations for verification using the public key and hash. I’m not sure (yet) how this could be implemented in bitcoinfuzz, but I think it would be a very useful target.
If you examine the Core Lightning code closely, you will see that it creates a loop until a signature with a low S is generated. (Bitcoin Core also does this: https://github.com/bitcoin/bitcoin/blob/bf69442b3f5004dc3df5a1b1d752114ba68fa5f4/src/key.cpp#L221)
/* Grind for low R */
do {
    ok = secp256k1_ecdsa_sign(secp256k1_ctx,
                              s,
                              h->sha.u.u8,
                              privkey->secret.data, NULL,
                              dev_no_signature_grind ? NULL
                                                     : extra_entropy);
    ((u32 *)extra_entropy)[0]++;
    if (dev_no_signature_grind)
        break;
} while (!sig_has_low_r(s));
Ultimately, all implementations using libsecp256k1 (https://github.com/bitcoin-core/secp256k1) must first normalize and then verify signatures.
In the decred secp256k1 implementation they mention that it's in accordance with RFC6979 and BIP0062 (https://github.com/bitcoin/bips/blob/master/bip-0062.mediawiki#low-s-values-in-signatures):
// The produced signature is deterministic (same message and same
// key yield the same signature) and canonical in accordance with RFC6979 and
// BIP0062.
This means that the output of the sign function will always be normalized, i.e., it will be a low-S signature. That's why normalization is necessary. I think it's fine, since they should all give the same result. Probably all the other secp256k1 libraries will have this behavior to test.
https://github.com/bitcoin/bitcoin/blob/4b24bfeab9d6732aae3e69efd33105792ef1198f/src/pubkey.cpp#L283
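For reference, this is roughly the pattern on the verification side, shown as a minimal sketch that uses only public libsecp256k1 calls; the helper name is illustrative.
#include <cstdint>
#include <secp256k1.h>

// Accept both low-S and high-S encodings: secp256k1_ecdsa_verify rejects
// high-S signatures, so the signature is normalized to low-S form first.
bool verify_hash_signature(const secp256k1_context* ctx,
                           const secp256k1_ecdsa_signature* sig,
                           const uint8_t hash[32],
                           const secp256k1_pubkey* pubkey)
{
    secp256k1_ecdsa_signature normalized;
    // The return value (1 = input was high-S, 0 = already low-S) is
    // intentionally ignored; `normalized` holds a low-S signature either way.
    secp256k1_ecdsa_signature_normalize(ctx, &normalized, sig);
    return secp256k1_ecdsa_verify(ctx, &normalized, hash, pubkey) == 1;
}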
a very good target to have would be one that generates a signature using the private key and hash, then provides that signature to other implementations for verification using the public key and hash. I’m not sure (yet) how this could be implemented in bitcoinfuzz, but I think it would be a very useful target.
We could create a custom mutator that would allow us to do this.
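As a rough sketch of what that sign/verify cross-check could look like (the decred_verify_compact wrapper below is hypothetical, not an existing bitcoinfuzz interface):
#include <cassert>
#include <cstdint>
#include <secp256k1.h>

// Hypothetical cgo export: verifies a 64-byte compact signature against a
// 32-byte hash and a 33-byte compressed public key, returning 1 on success.
extern "C" int decred_verify_compact(const uint8_t* sig64, const uint8_t* hash32, const uint8_t* pubkey33);

// Sign with libsecp256k1, then require the Decred implementation to accept
// the resulting signature.
void cross_check_sign_verify(const secp256k1_context* ctx, const uint8_t privkey[32], const uint8_t hash[32])
{
    if (!secp256k1_ec_seckey_verify(ctx, privkey)) return;

    secp256k1_ecdsa_signature sig;
    if (!secp256k1_ecdsa_sign(ctx, &sig, hash, privkey, nullptr, nullptr)) return;
    uint8_t sig64[64];
    secp256k1_ecdsa_signature_serialize_compact(ctx, sig64, &sig);

    secp256k1_pubkey pubkey;
    secp256k1_ec_pubkey_create(ctx, &pubkey, privkey);
    uint8_t pub33[33];
    size_t len = sizeof(pub33);
    secp256k1_ec_pubkey_serialize(ctx, pub33, &len, &pubkey, SECP256K1_EC_COMPRESSED);

    // Even when the two libraries produce different (but valid) signatures,
    // each should verify the other's output.
    assert(decred_verify_compact(sig64, hash, pub33) == 1);
}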
~~Okay, suppose secp256k1 is unable to produce a signature at any point in the code. How about returning an empty string in all such cases? That way, we align with the two possible outcomes from Decred’s code:~~
- ~~If Decred returns an empty string ("") or a nil char pointer (which we ignore), everything stays consistent -> we either compare two empty results or simply skip the Decred one.~~
- ~~If Decred returns a non empty value, comparing it against the empty secp256k1 output will clearly fail, which is the expected behaviour. (assuming secp256k1 is unable to produce the signature, then Decred should also not produce a signature for it, hope this assumption is safe to rely on?)~~
~~I also realized this approach can be applied to all other cases as well (ECDH, PrivateToPublicKey), since Decred is very lenient in each of these scenarios.~~
~~WDYT?~~
EDIT: Never mind, it is mentioned in secp.PrivateKeyFromBytes that: This function primarily exists to provide a mechanism for converting serialized private keys that are already known to be good.
Sorry for not pointing this out earlier, but looking at this again:
The issue is that when signing, decred always normalizes the signature to lower-S form. We should call the secp256k1_ecdsa_signature_normalize function in the secp256k1 module after signing to have the same behavior.
int create_signature_from_hash(const uint8_t *privkey, const uint8_t *hash, secp256k1_ecdsa_signature& signature)
{
    int ret = 0;
    ret = secp256k1_ec_seckey_verify(secp256k1_ctx, privkey);
    ret = ret && secp256k1_ecdsa_sign(secp256k1_ctx, &signature, hash, privkey, nullptr, nullptr);
    ret = ret && secp256k1_ecdsa_signature_normalize(secp256k1_ctx, &signature, &signature); // add this line
    return ret;
}
secp256k1_ecdsa_signature_normalize returns 1 if sigin was not normalized, 0 if it already was (https://github.com/bitcoin-core/secp256k1/blob/e7f7083b530a55c83ce9089a7244d2d9d67ac8b2/include/secp256k1.h#L589)
So with this line:
ret = ret && secp256k1_ecdsa_signature_normalize(secp256k1_ctx, &signature, &signature); we are essentially skipping the case where the signature is already normalized
Another thing is (https://github.com/bitcoin-core/secp256k1/blob/e7f7083b530a55c83ce9089a7244d2d9d67ac8b2/include/secp256k1.h#L658-L661): that means we are currently skipping all valid signature test cases from secp256k1. If both implementations follow low-S normalization and RFC6979, the signatures should match every time, so this is likely a bug.
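A small self-contained sketch of the behavior being described (not the PR's code): secp256k1_ecdsa_sign already emits low-S signatures, so the normalize call reports 0 for every signature it just produced, and chaining that result with && wipes out ret.
#include <cassert>
#include <cstdint>
#include <secp256k1.h>

void check_fresh_signature_is_low_s(const secp256k1_context* ctx, const uint8_t privkey[32], const uint8_t hash[32])
{
    secp256k1_ecdsa_signature sig;
    if (!secp256k1_ec_seckey_verify(ctx, privkey)) return;
    if (!secp256k1_ecdsa_sign(ctx, &sig, hash, privkey, nullptr, nullptr)) return;

    // Passing NULL as sigout only asks whether the input was high-S.
    int was_high_s = secp256k1_ecdsa_signature_normalize(ctx, nullptr, &sig);
    // Freshly signed output is already low-S, so `ret = ret && was_high_s;`
    // would turn every successful sign into ret == 0.
    assert(was_high_s == 0);
}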
WDYT?
If you examine the Core Lightning code closely, you will see that it creates a loop until a signature with a low S is generated. (Bitcoin Core also does this: https://github.com/bitcoin/bitcoin/blob/bf69442b3f5004dc3df5a1b1d752114ba68fa5f4/src/key.cpp#L221)
/* Grind for low R */
do {
    ok = secp256k1_ecdsa_sign(secp256k1_ctx,
                              s,
                              h->sha.u.u8,
                              privkey->secret.data, NULL,
                              dev_no_signature_grind ? NULL
                                                     : extra_entropy);
    ((u32 *)extra_entropy)[0]++;
    if (dev_no_signature_grind)
        break;
} while (!sig_has_low_r(s));
Here it loops until it finds a low R and not low S. From what I found, this is done for size optimization, so that the tx size is reduced (and thus the tx fee).
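For illustration, a low-R check can be done on the compact serialization, since R needs an extra 0x00 padding byte in DER whenever its top bit is set; this is a sketch, not CLN's actual sig_has_low_r.
#include <cstdint>
#include <secp256k1.h>

bool has_low_r(const secp256k1_context* ctx, const secp256k1_ecdsa_signature* sig)
{
    uint8_t compact[64]; // first 32 bytes are R, last 32 bytes are S
    secp256k1_ecdsa_signature_serialize_compact(ctx, compact, sig);
    // R encodes to 32 bytes (no DER padding) exactly when its top bit is clear.
    return compact[0] < 0x80;
}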
Here it loops until it finds a low R and not low S. From what I found, this is done for size optimization, so that the tx size is reduced (and thus the tx fee).
True, my mistake. I read it quickly and completely forgot that it was talking about the R. 😅
secp256k1_ecdsa_signature_normalize returns 1 if sigin was not normalized, 0 if it already was (https://github.com/bitcoin-core/secp256k1/blob/e7f7083b530a55c83ce9089a7244d2d9d67ac8b2/include/secp256k1.h#L589). So with this line: ret = ret && secp256k1_ecdsa_signature_normalize(secp256k1_ctx, &signature, &signature); we are essentially skipping the case where the signature is already normalized.
Another thing is (https://github.com/bitcoin-core/secp256k1/blob/e7f7083b530a55c83ce9089a7244d2d9d67ac8b2/include/secp256k1.h#L658-L661): that means we are currently skipping all valid signature test cases from secp256k1. If both implementations follow low-S normalization and RFC6979, the signatures should match every time, so this is likely a bug.
WDYT?
Thanks for looking into this more deeply. I should have done the same. As you said, if the signatures are different, it's a bug, so we don't need to call secp256k1_ecdsa_signature_normalize.
As you said, if the signatures are different, it's a bug, so we don't need to call secp256k1_ecdsa_signature_normalize.
Okay, I’m not sure about the details or the exact reason for the difference, but should I open an issue in Decred to discuss this further? Or should we wait, considering the discussion in (https://bitcoin.stackexchange.com/questions/129270/why-do-different-implementations-produce-different-der-signatures-for-the-same-p), and only open an issue if there’s actually a problem during signature verification across implementations?
I verified (with a small set of inputs) that even when the signatures differ, both verify correctly, so let’s drop the sign_compact and sign_der functions, and I will add a sign_verify function using a custom mutator.
Marking this PR as draft for now.
As you said, if the signatures are different, it's a bug, so we don't need to call secp256k1_ecdsa_signature_normalize.
Okay, I’m not sure about the details or the exact reason for the difference, but should I open an issue in Decred to discuss this further? Or should we wait, considering the discussion in (https://bitcoin.stackexchange.com/questions/129270/why-do-different-implementations-produce-different-der-signatures-for-the-same-p), and only open an issue if there’s actually a problem during signature verification across implementations?
I think it would be worth opening an issue to figure out what's going on.
It looks like Cryptofuzz has already found two issues similar to those we found in our fuzzing.
https://github.com/MozillaSecurity/cryptofuzz/blob/3d2377257129fc5da6effb92b0736e31db147dee/modules/decred/cryptofuzz.go#L238-L243
- They skip if the private key is zero. If the hash and private key are zero, the sign function will enter an infinite loop.
- They skip if the private key is larger than the scalar 115792089237316195423570985008687907852837564279074904382605163141518161494337 (the secp256k1 curve order N).
I verified (with a small set of inputs) that even when the signatures differ, both verify correctly, so let’s drop the sign_compact and sign_der functions, and I will add a sign_verify function using a custom mutator.
Marking this PR as draft for now.
I think this target is still worthwhile. The different signatures are a real bug. While that probably isn't dangerous, it means that one of the implementations isn't properly following RFC 6979 and low-S normalization.
Has this been addressed? https://github.com/decred/dcrd/issues/3580#issuecomment-3619932956
It looks like Cryptofuzz has already found two issues similar to those we found in our fuzzing.
https://github.com/MozillaSecurity/cryptofuzz/blob/3d2377257129fc5da6effb92b0736e31db147dee/modules/decred/cryptofuzz.go#L238-L243
1. They skip if the private key is zero. If the hash and private key are zero, the sign function will enter an infinite loop.
2. They skip if the private key is larger than the scalar 115792089237316195423570985008687907852837564279074904382605163141518161494337 (the secp256k1 curve order N).
To provide some additional details here, these are both well-documented preconditions for using the PrivKeyFromBytes function, which is something fuzzing code often does.
Its documentation explicitly states:
... WARNING: This means passing a slice with more than 32 bytes is truncated and that truncated value is reduced modulo N. Further, 0 is not a valid private key. It is up to the caller to provide a value in the appropriate range of [1, N-1]. Failure to do so will either result in an invalid private key or potentially weak private keys that have bias that could be exploited. This function primarily exists to provide a mechanism for converting serialized private keys that are already known to be good.
Typically callers should make use of GeneratePrivateKey or GeneratePrivateKeyFromRand when creating private keys since they properly handle generation of appropriate values.
So, they aren't issues but rather documented preconditions, and Cryptofuzz is doing the correct thing by ensuring the preconditions are satisfied.
... The different signatures are a real bug. While that probably isn't dangerous, it means that one of the implementations isn't properly following RFC 6979 and low-S normalization.
Already discussed in the issue NishanBansal2003 opened, but for the sake of having the referenced information all in the same place, this isn't a bug and it's not due to low-S normalization.
It is because RFC6979 does not guarantee identical signatures between implementations: it explicitly permits the use of different variants in section 3.6, and those variants will result in different signatures. So, it only guarantees deterministic signatures given the same set of inputs with the same implementation (or, more precisely, implementations using the exact same variants).
In other words, two different implementations producing different, but still independently deterministic, signatures is not a bug or an issue in terms of security.
The specific difference that is being observed here is that both implementations use different variants the specification explicitly calls out as valid. In particular, Decred's code uses this variant in section 3.6:
- **It is possible to use H(m) directly, instead of bits2octets(H(m)), as part of the HMAC input.** As explained in Section 3.5, we use bits2octets(H(m)) in order to ease integration into systems that already use an (EC)DSA signature engine by sending it an already-truncated hash value. Using the whole H(m) does not introduce any vulnerability.
Emphasis mine.
On the other hand, libsecp256k1 uses the bits2octets(H(m)) variant.
In other words, libsecp256k1 reduces the hash modulo the secp256k1 curve order prior to feeding it into the HMAC whereas Decred's secp256k1 does not.
In short, both implementations are compliant with the spec and both are producing cryptographically valid signatures.
Given all of that, along with the fact that ECDSA requires the hash to be reduced modulo the secp256k1 curve order prior to signing anyway, the deterministic signatures can be made to match by ensuring both implementations use the same variant for the deterministic nonce generation, which can be achieved by normalizing the hash (reducing it modulo the secp256k1 curve order) prior to calling Sign.
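A minimal sketch of that hash normalization (illustrative helper, not bitcoinfuzz code): since a 32-byte hash is always less than 2N, a single conditional subtraction of the group order reduces it modulo N before it is handed to either Sign implementation.
#include <cstdint>
#include <cstring>

// secp256k1 group order N, big-endian.
static const uint8_t SECP256K1_ORDER[32] = {
    0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
    0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFE,
    0xBA, 0xAE, 0xDC, 0xE6, 0xAF, 0x48, 0xA0, 0x3B,
    0xBF, 0xD2, 0x5E, 0x8C, 0xD0, 0x36, 0x41, 0x41,
};

// Reduce a 32-byte big-endian hash modulo N in place.
void reduce_hash_mod_n(uint8_t hash[32])
{
    if (std::memcmp(hash, SECP256K1_ORDER, 32) < 0) return; // already < N

    // hash >= N: subtract N once, propagating the borrow byte by byte.
    int borrow = 0;
    for (int i = 31; i >= 0; --i) {
        int diff = int(hash[i]) - int(SECP256K1_ORDER[i]) - borrow;
        borrow = diff < 0;
        hash[i] = uint8_t(diff & 0xFF);
    }
}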
Let's satisfy the precondition of not providing an invalid private key of 0.
func isAllZeros(b []byte) bool {
    for _, v := range b {
        if v != 0 {
            return false
        }
    }
    return true
}
I don’t think this is required, since we are already doing this:
if (!secp256k1_ec_seckey_verify(secp256k1_ctx, privkey))
{
    return std::nullopt;
}
With this, we skip comparing cases where the private key is invalid
I don’t think this is required, since we are already doing this:
if (!secp256k1_ec_seckey_verify(secp256k1_ctx, privkey))
{
    return std::nullopt;
}
With this, we skip comparing cases where the private key is invalid
The main reason is that we should use the Decred secp256k1 library correctly. An infinite loop occurs when the library tries to sign with a zero private key and a zero hash. If we don't follow the preconditions, we'll encounter fake bugs.
In general, I'd suggest something like the following wrapper code any time you are trying to use raw bytes that are not already known to be a valid private key, which obviously applies to the case with fuzzing input. That way, all instances that are attempting to convert arbitrary bytes can use the same func and be assured they're always dealing with a valid private key.
func ParsePrivKey(privKeyBytes []byte) (*secp256k1.PrivateKey, bool) {
    if len(privKeyBytes) > 32 {
        return nil, false
    }
    var key secp256k1.ModNScalar
    overflows := key.SetByteSlice(privKeyBytes)
    if overflows || key.IsZero() {
        return nil, false
    }
    return secp256k1.NewPrivateKey(&key), true
}
....
privKeyBytes := C.GoBytes(unsafe.Pointer(privKeyData.data), privKeyData.length)
privKey, valid := ParsePrivKey(privKeyBytes)
if !valid {
    return nil
}
EDIT: As a bit of an aside, I think it would probably be worthwhile to consider changing the API in the Decred Go lib in a future version to provide a constant-time implementation for working directly with raw bytes, which avoids the potential footguns altogether by refusing to create a private key when the raw bytes are invalid rather than solely documenting the preconditions and putting the onus on the caller.
Needs rebase. I hope this is the last one, will do a final review here, sorry for the noise.