Implement P256 verification via RIP-7212 precompile with Solidity fallback
Fixes LIB-1225
Why do we care about secp256r1?
Most security applications use secp256r1 (also known as P-256). This led hardware manufacturers to implement it and leave other “exotic” curves aside. Today, billions of people around the world own devices with special security hardware that supports secp256r1. If that were the curve used by Ethereum, all these people would essentially already own a hardware wallet … but unfortunately that is not the case.
If we cannot easily modify the curves supported by major smartphone manufacturers, we can provide tools to verify secp256r1 signatures onchain. This would allow control of ERC-4337 smart wallets (among others) through a device designed to handle security keys (something users are notoriously bad at).
What @openzeppelin/contracts could provide
Existing wallets provide mechanisms to produce secp256k1 signatures, both for transactions and messages. Solidity provides a precompile that, given a hash and a signature, recovers the address of the signer (using secp256k1). No such precompile exists for secp256r1.
There exist Solidity implementations of the secp256r1 “verification” workflow. There is also a proposal to provide that verification through a precompile. Even if the precompile is implemented, it is likely that many chains will not upgrade soon, so a Solidity implementation would remain useful for users on those chains.
In some cases, users may want to follow the “recovery” flow they are familiar with. There is no proposal for a precompile that would perform that operation, so a Solidity implementation would likely be useful to many users and remain uncontested in the near future.
Notes
Stack too deep
The current proposed implementation works well with the optimizer enabled. However, compilation fails with "stack too deep" if optimizations are NOT turned on. This PR enables optimizations for all tests to circumvent this issue. Users will also have to enable optimizations if they want to use this library, which they should definitely do given the gas costs.
- [x] This is still an issue when running coverage :/
- This was fixed by adding `details: { yul: true }` to the optimizer settings. This change in optimization setup may affect the accuracy of gas reporting in this PR (the reference doesn't use the same settings)
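For reference, a hypothetical sketch of where that optimizer tweak sits in hardhat.config.js (the surrounding values, such as `runs`, are assumptions and may differ from the repo's actual config):

```javascript
// Hypothetical excerpt of hardhat.config.js — only `details: { yul: true }`
// is taken from the discussion above; the rest is illustrative.
module.exports = {
  solidity: {
    settings: {
      optimizer: {
        enabled: true,
        runs: 200,
        details: { yul: true }, // workaround for "stack too deep" during coverage
      },
    },
  },
};
```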
Benchmarking
This repo provides benchmarking of this implementation against other existing ones.
PR Checklist
- [x] Tests
- [x] Documentation
- [x] Changeset entry (run `npx changeset add`)
🦋 Changeset detected
Latest commit: 5314727fadea9c507b952c138c438cde7fa5f9c7
The changes in this PR will be included in the next version bump.
This PR includes changesets to release 1 package
| Name | Type |
|---|---|
| openzeppelin-solidity | Minor |
Relevant source for discussion: https://www.hyperelliptic.org/EFD/g1p/auto-shortw-jacobian.html
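For reference, the formulas on that EFD page (I'm using "dbl-2001-b" for a = -3 and "add-2007-bl") can be sanity-checked with a small Python model. This is not the PR's Solidity code, just an illustrative sketch of the Jacobian arithmetic under discussion:

```python
# Illustrative model of the EFD Jacobian formulas for P-256 (a = -3).
# A Jacobian point (X, Y, Z) represents the affine point (X/Z^2, Y/Z^3).

# NIST P-256 domain parameters
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
A = P - 3
B = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b
GX = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
GY = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5

def j_double(X1, Y1, Z1):
    """EFD dbl-2001-b, specialized for a = -3."""
    delta = Z1 * Z1 % P
    gamma = Y1 * Y1 % P
    beta = X1 * gamma % P
    alpha = 3 * (X1 - delta) * (X1 + delta) % P
    X3 = (alpha * alpha - 8 * beta) % P
    Z3 = ((Y1 + Z1) ** 2 - gamma - delta) % P
    Y3 = (alpha * (4 * beta - X3) - 8 * gamma * gamma) % P
    return X3, Y3, Z3

def j_add(X1, Y1, Z1, X2, Y2, Z2):
    """EFD add-2007-bl; assumes distinct, non-infinite inputs."""
    Z1Z1 = Z1 * Z1 % P
    Z2Z2 = Z2 * Z2 % P
    U1 = X1 * Z2Z2 % P
    U2 = X2 * Z1Z1 % P
    S1 = Y1 * Z2 * Z2Z2 % P
    S2 = Y2 * Z1 * Z1Z1 % P
    H = (U2 - U1) % P
    I = 4 * H * H % P
    J = H * I % P
    r = 2 * (S2 - S1) % P
    V = U1 * I % P
    X3 = (r * r - J - 2 * V) % P
    Y3 = (r * (V - X3) - 2 * S1 * J) % P
    Z3 = ((Z1 + Z2) ** 2 - Z1Z1 - Z2Z2) % P * H % P
    return X3, Y3, Z3

def to_affine(X, Y, Z):
    """Convert back to affine coordinates: (X/Z^2, Y/Z^3)."""
    zinv = pow(Z, -1, P)
    return X * zinv * zinv % P, Y * zinv * zinv * zinv % P
```

A quick cross-check against independent affine arithmetic (e.g. computing G + 2G both ways) confirms the formulas are transcribed correctly.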
IMO the API should be `verify` and `verifyNative` (renamed from `verify7212`). `tryVerify7212` should be private; `verifyNative` assumes that the precompile is available. `verifySolidity` should be private too, though it may be needed internally for testing...
The API functions should also include malleability checks. Optionally there could be separate functions without malleability checks (one for `verify` and one for `verifyNative`?). This is what EIP-7212 recommends:
Wrapper libraries SHOULD add a malleability check by default, with functions wrapping the raw precompile call (exact NIST FIPS 186-5 spec, without malleability check) clearly identified. For example, P256.verifySignature and P256.verifySignatureWithoutMalleabilityCheck. Adding the malleability check is straightforward and costs minimal gas.
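The symmetry this check guards against can be sketched with a hypothetical Python model of ECDSA over P-256 (not the PR's Solidity code; naive affine arithmetic, demonstration only — the `ensure_lower_s` name merely echoes the helper discussed in this thread):

```python
# Demonstrates ECDSA malleability: if (r, s) verifies, so does (r, n - s).
# Naive affine arithmetic over NIST P-256; for illustration only.

P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
A = P - 3
N = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551
G = (0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296,
     0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5)

def ec_add(p1, p2):
    # Affine point addition; None is the point at infinity.
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    # Double-and-add scalar multiplication.
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

def sign(e, d, k):
    # e: message hash as integer, d: private key, k: nonce (fixed, demo only!)
    r = ec_mul(k, G)[0] % N
    s = pow(k, -1, N) * (e + r * d) % N
    return r, s

def verify(e, Q, r, s):
    # Standard ECDSA verification: x(u1*G + u2*Q) mod n == r
    w = pow(s, -1, N)
    pt = ec_add(ec_mul(e * w % N, G), ec_mul(r * w % N, Q))
    return pt is not None and pt[0] % N == r

def ensure_lower_s(r, s):
    # Hypothetical normalization helper: map s into the lower half-order.
    return (r, N - s) if s > N // 2 else (r, s)
```

Since 1/(n - s) ≡ -1/s (mod n), the malleated signature produces the negated point, which has the same x-coordinate, so both signatures pass; rejecting s > n/2 removes the ambiguity.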
++ on all points -- adding malleability check in the "default" function also aligns with the other utils like ECDSA and RSA in terms of security checks
verifySolidity should be private too, though it may be needed internally for testing
I understand that ideally the Solidity version won't be used if EIP-7212 is shipped to every network, but I guess that's not the case, and perhaps some networks may never get the precompile. Think about networks running on the OP Stack or something similar; it may be easier for them to run their rollup with a Solidity implementation than to wait for a generalized release.
I'm not sure how upgrades to these chains work, but the point still stands. I would be in favor of keeping it, so I left it internal for now.
Optionally there could be separate functions without malleability checks (one for verify and one for verifyNative?).
Added them as private, but I know @Amxx was strongly in favor of keeping malleability by default because the tooling produces values on both sides of the curve. I updated the tests accordingly to ensure a lower-order s value; I think the fact that the tooling produces both values is easily solvable by the dapp developer.
Currently the interface is left with:
- verify
- verifyNative
- verifySolidity
- recovery
- isOnCurve
If we expose the malleable ones, I would be in favor only of verifyMalleable, and also a checkLowerS function as we discussed @Amxx. I tried implementing it now, but it's unclear whether checkLowerS should revert or just return false. I didn't add it yet because I'm not sure it's useful in the end.
I think the fuzz tests are failing due to the cases I shared with @cairoeth. The Hardhat tests are failing because there's a case using r as s, which may be in the lower order.
Think about networks running on the OP Stack or something similar; it may be easier for them to run their rollup with a Solidity implementation than to wait for a generalized release.
Right, but I'm saying they should use verify, i.e. native with a fallback to Solidity. I'm thinking using the Solidity version directly should be discouraged? The only reason I can think one might want to use the Solidity version directly is if the precompile is there but is somehow not compliant.
It would be fine to make it internal; my main point is that its use should be discouraged. As a consequence, for example, it'd be ok to only have the raw variant without the malleability check.
Coming back from my weekend, I see a lot of changes, so I have questions:
- What is the point of having both `verify` and `_verifyMalleable` if `_verifyMalleable` is private and only ever called by `verify`?
- Same question for `verifyNative` and `_verifyNativeMalleable`?
- Why is `verifySolidity` accepting malleable signatures but not named accordingly? Note that it is internal, so accessible.
- Do we expect all signers/apps to implement `ensureLowerOrderS`? The Ethereum wallets that produce secp256k1 signatures all know to use a lower s value, but other cryptographic tools do not. Noble does not. I'm unsure, but I wouldn't be surprised if smartphone enclaves (our main target) don't either. Do we want to force apps into modifying the signatures produced by the signers?

Last week we agreed to have the function malleable by default, with an opt-in helper so devs can easily check the s value. Now something completely different is implemented...
Note, I'm in favor of:
- having the least opinionated library
- providing as many options to the devs
- having as few exposed functions as possible.
IMO, this is achieved with:
- `verify`: malleable, generic
- `verifyNative`: malleable, reduces deployment size when the precompile is known to be present
- `recovery`: malleable
- `checkSignatureMalleability`: opt-in when extra protection is required
@Amxx re: malleable by default
This is discouraged by EIP-7212:
Wrapper libraries SHOULD add a malleability check by default, with functions wrapping the raw precompile call (exact NIST FIPS 186-5 spec, without malleability check) clearly identified. For example, P256.verifySignature and P256.verifySignatureWithoutMalleabilityCheck. Adding the malleability check is straightforward and costs minimal gas.
Coming back from my weekend, I see a lot of changes, so I have questions
I'll answer below, note that it wasn't meant to be final, I pushed what I thought was agreed from everyone's opinion but it's still open to discussion to consider yours 🙌🏻
- What is the point of having both `verify` and `_verifyMalleable` if `_verifyMalleable` is private and only ever called by `verify`?
- Same question for `verifyNative` and `_verifyNativeMalleable`?
To discuss whether it makes sense to inline it or not. Given that you favored keeping a malleable version, I thought it was easier to make it internal and remove the underscore if we agreed. Not a fan of that though.
- Why is `verifySolidity` accepting malleable signatures but not named accordingly? Note that it is internal, so accessible.
Considering @frangio's opinion I got the sense it should be private, but I waited to hear your opinion. Please don't consider it decided yet.
- Do we expect all signers/apps to implement `ensureLowerOrderS`? The Ethereum wallets that produce secp256k1 signatures all know to use a lower s value, but other cryptographic tools do not. Noble does not. I'm unsure, but I wouldn't be surprised if smartphone enclaves (our main target) don't either. Do we want to force apps into modifying the signatures produced by the signers?
I'm not surprised that Ethereum wallets started producing all signatures with a lower s value. When using signatures in a context that's not specific to smart contracts it may be fine to allow malleability; it's just that we care about replayability and double spending in smart contracts.
I agree with you. Apps will be forced to modify the signatures produced by their signers, but the workaround is fairly easy, and perhaps it's the most responsible way to design a wallet (a wallet that allows malleability would be risky!). I would argue this is probably why Ethereum wallets produce secp256k1 signatures with a lower s in the first place.
Last week we agreed to have the function malleable by default, with an opt-in helper so devs can easily check the s value. Now something completely different is implemented...
We can consider the helper, but I feel the overall consensus is to disallow malleability by default.
Note, I'm in favor of:
- having the least opinionated library
- providing as many options to the devs
- having as few exposed functions as possible.
So far I agree on providing flexibility to the devs, but the goal is to do it in the most secure way. If we don't find strong reasons to allow for malleability, then I wouldn't justify the risk of leaving a footgun in the library for the sake of tooling compatibility.
In my opinion, the tradeoff here is between broader compatibility and default security. When comparing the two I can see that:
- The workaround is an easy function to invert the signature to the lower S order. We can even commit to contributing to these libraries to mitigate the concern.
- The EIP-7212 already suggests wrapping the precompile.
My conclusion is that preserving broader compatibility puts our users at risk for a (yet unclear) use case. Disallowing malleability may be a pain for libraries dealing with P256 signatures, but we can specify guidelines for adapting signatures produced in the higher order.
Let's keep it how it is right now then (inlined malleability protection in tryNative and solidity, everything else gets it indirectly) + fewer internal functions
In accordance with the secp256r1 spec, fixed a bug in https://github.com/OpenZeppelin/openzeppelin-contracts/pull/4881/commits/d8f4f7eb52468d60404627410980f96975ac9243 for the case when n < p and x % n == r but x != r, where some signatures that should have been accepted were rejected.
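The edge case is easy to see numerically: for P-256 the group order n is smaller than the field prime p, so an x-coordinate in the gap [n, p) reduces to a smaller r, and a verifier must compare r against x mod n rather than against x itself. A small sketch (the specific x below is just an illustrative value in that gap):

```python
# P-256 field prime and group order
P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
N = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551

assert N < P          # the gap [n, p) exists for P-256

x = N + 5             # an x-coordinate inside the gap
assert x < P
r = x % N             # the r a signer would derive from this x
assert x != r         # a buggy `x == r` comparison rejects this signature
assert x % N == r     # the correct `x % n == r` comparison accepts it
```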
I didn't see the issue with the test vectors in Foundry @Amxx, any reasons to remove them?
I didn't see the issue with the test vectors in Foundry @Amxx, any reasons to remove them?
Our guideline has always been to do unit tests in Hardhat and fuzzing in Foundry. So instead of using a .jsonl that was processed by someone else, reading it in Solidity, and doing that unit testing in Foundry, I decided to get the original JSON file, unmodified, and process it in JavaScript like we do for RSA.
I understand the logic for 9b240146a72383a551c2451c1854ed9c176246d5, but we surely can do better.
EDIT: actually, every time we do a _jAdd, one of the points is already in memory, so we can just pass the memory object and load it at the last moment. See ecd3aa2332591300eaf3acf7564ad2f2a958125d. IMO that is WAY cleaner (and it doesn't need storing to the scratch space).
Also, I re-applied the changes to hardhat.config.js, which are IMO a significant improvement to the clarity of the config, and which do not enable IR by default.
Re: Transaction reverted: trying to deploy a contract whose code is too large
I think allowUnlimitedContractSize can be safely set to true always.
If a non-exposed contract is too large it should trigger the compiler warning and fail CI anyway:
https://github.com/OpenZeppelin/openzeppelin-contracts/blob/c3f8b760ad9d0431f33e8303658ccbdcc1610f24/hardhat.config.js#L91-L97
I understand the logic for https://github.com/OpenZeppelin/openzeppelin-contracts/commit/9b240146a72383a551c2451c1854ed9c176246d5, but honestly that is an abomination. We surely can do better.
I'd like to hear why this is a problem if using the scratch space is safe. Regardless of how hacky it is, I do think it's worse that we didn't discuss the config and it's there again. I would appreciate it if you communicated your decisions better. It's pretty much impossible to follow your preferences here!
It turns out that after #5098, forge coverage hits the "Stack too deep" error again, although the tests compile without --via-ir.
We can use --ir-minimum in scripts/checks/coverage.sh and it should work. I'll try it out. See https://github.com/foundry-rs/foundry/issues/3357
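A hypothetical sketch of what the coverage.sh change might look like (the surrounding script contents are an assumption; only the `--ir-minimum` flag comes from the discussion above):

```shell
# Hypothetical excerpt of scripts/checks/coverage.sh:
# --ir-minimum compiles via unoptimized IR, sidestepping "stack too deep"
# under `forge coverage` (see foundry-rs/foundry#3357).
forge coverage --ir-minimum --report lcov
```

Note that `--ir-minimum` can make coverage line mappings slightly less precise, which is the tradeoff the Foundry issue discusses.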