secp256k1-zkp
Add dedicated methods for verifying asset/value proofs
- [x] implement value proof verification
- [x] implement asset proof verification
- [x] add more tests
utACK 9eea0819bd08e1f98cfcb932728d6bbb5290922d
I don't have time at this stage to test this; wally uses the full proof validation code for blind asset/value proofs, since we use that code already for blinding. Once PSBTv2 is in I should be able to switch over and make sure our tests still pass, though it may be a while.
From an API perspective, it may be nice to have create_ variants for creating single value proofs that take the reduced set of parameters and call the full functions with the repeated/defaulted values (even if it's not worth a custom implementation of the actual proof creation).
Agreed, create_ variants would be nice. I'll add those to this PR.
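For illustration, a wrapper along the lines suggested above might look like this. A minimal sketch only, assuming the documented secp256k1_rangeproof_sign behavior that exp = -1 reveals the exact value; the helper name and defaulted parameters here are mine, not the PR's actual implementation:

```c
#include <stddef.h>
#include <stdint.h>
#include <secp256k1_rangeproof.h>

/* Hypothetical convenience wrapper (not the PR's code): prove that `commit`
 * commits to exactly `value` by calling the full signer with defaulted
 * parameters. exp = -1 is the documented "make the value public" case, i.e.
 * an exact-value proof; the message and extra-commit inputs go unused. */
static int rangeproof_create_value_sketch(
    const secp256k1_context *ctx,
    unsigned char *proof, size_t *plen,
    uint64_t value,
    const secp256k1_pedersen_commitment *commit,
    const unsigned char *blind,   /* 32-byte blinding factor of `commit` */
    const unsigned char *nonce,   /* 32-byte secret nonce */
    const secp256k1_generator *gen
) {
    return secp256k1_rangeproof_sign(ctx, proof, plen,
                                     value,      /* min_value == value */
                                     commit, blind, nonce,
                                     -1,         /* exp: reveal exact value */
                                     0,          /* min_bits: auto */
                                     value,
                                     NULL, 0,    /* no embedded message */
                                     NULL, 0,    /* no extra commitment */
                                     gen);
}
```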
I have added a rangeproof_create_value function to create a single-value rangeproof. I don't think there's much value in doing the same for surjection proofs -- to create a single-value surjection proof you just use the normal surjectionproof API in the "obvious" way, by providing a single input and setting n_ephemeral_input_tags to 1. No magic or unused parameters.
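Concretely, the single-input usage might look like this; a minimal sketch against the existing surjection proof API, with context setup and error handling assumed:

```c
#include <stddef.h>
#include <secp256k1_surjectionproof.h>

/* Sketch (not from the PR): prove that `output_tag` is a reblinding of the
 * single blinded input tag `input_tag`. No dedicated API is needed: pass one
 * input and n_ephemeral_input_tags = 1 to the normal functions. */
static int single_input_surjection_sketch(
    const secp256k1_context *ctx,
    secp256k1_surjectionproof *proof,
    const secp256k1_fixed_asset_tag *fixed_tag, /* unblinded asset tag */
    const secp256k1_generator *input_tag,       /* blinded input asset tag */
    const secp256k1_generator *output_tag,      /* blinded output asset tag */
    const unsigned char *input_blind,           /* 32-byte input blinder */
    const unsigned char *output_blind,          /* 32-byte output blinder */
    const unsigned char *seed32
) {
    size_t input_index;
    /* With one input there is nothing to choose; initialize selects input 0
     * (the input and output fixed tags are the same asset here). */
    if (!secp256k1_surjectionproof_initialize(ctx, proof, &input_index,
                                              fixed_tag, 1, 1, fixed_tag,
                                              100 /* n_max_iterations */,
                                              seed32)) {
        return 0;
    }
    return secp256k1_surjectionproof_generate(ctx, proof,
                                              input_tag, 1, output_tag,
                                              input_index,
                                              input_blind, output_blind);
}
```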
The memory usage of surjectionproof_generate is pretty awful, but not nearly as bad as rangeproof_sign's -- if this is a problem then I can create a dedicated function. You can also compile with --enable-reduced-surjection-proof-size, which will reduce the memory usage at the expense of no longer being able to validate proofs for txes with more than 16 inputs.
@apoelstra rangeproof was the painful case for me, so thanks for that, LGTM.
wally (following the Elements impl) only creates surjection proofs with a maximum of 3 used inputs, so for generation even the reduced mode has 12 more unused stack elements than we need. However, we don't enable reduced mode, so that we can verify proofs created by others, since wally is available for use as a general-purpose lib. In an ideal world we'd be able to specify the number we use instead of having a fixed limit, and better still, be able to generate with one limit and verify with another. I can see Jade/BitBox using 3/16 for create/verify, for example, assuming they can both verify on-device.
but not nearly as bad as rangeproof_sign
Note: in secp256k1_rangeproof_sign, the final memset(prep, 0, 4096); is redundant, since prep has already been zeroed above. If the compiler doesn't eliminate it as a dead store, it keeps prep alive for the whole function, which would prevent some other arrays like pubs from reusing its space.
You could use a narrower unsigned type for rsizes and secidx to trivially shave some stack space in the general case; otherwise I suspect some judicious use of restrict would be needed to allow the compiler to optimally overlay the used stack items.
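A stripped-down illustration of the liveness point (just the shape of the issue, not the actual rangeproof_sign code):

```c
#include <string.h>

/* If the final memset is not eliminated as a dead store, prep is live until
 * the end of the function, so its 4096 bytes cannot be overlaid with pubs
 * even though the two arrays' useful lifetimes are disjoint. */
void stack_overlay_sketch(void) {
    unsigned char prep[4096];
    unsigned char pubs[2048];

    memset(prep, 0, sizeof(prep));
    /* ... phase 1: fill and consume prep ... */
    memset(prep, 0, sizeof(prep)); /* redundant: prep was zeroed above */

    /* ... phase 2: work with pubs only ... */
    memset(pubs, 0, sizeof(pubs));
}
```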
@jgriffiths I actually have a massive branch (which will need to be rebased now, since I reused bits and pieces of it for this) whose goal is to dramatically reduce the stack space of the rangeproof code. Unfortunately it required a near rewrite of all the logic, pulling the borromean sig stuff out of its own functions and inlining it in the rangeproof/surjectionproof code.
Essentially the way it works is by directly using the proof buffer to store data so that we don't need all these arrays of intermediate values.
I'm quite proud of it but I don't think we have the review bandwidth to do anything with it :/ and I'm not sure that I finished it.
https://github.com/ElementsProject/secp256k1-zkp/pull/160 is the start of it
I don't think we have the review bandwidth to do anything with it
The eternal problem lol.
At this stage things are fine for us; it's only if the hardware wallets have issues that we may need to look at lowering resource usage. That time is creeping forward for Jade at least, since the QR code handling has been improved to the point where we should be able to take PSBTs from the camera to sign. But until we are ready to resource that, we can't commit anyone to review or test either.
In the meantime though I'd remove the redundant memset just because it's, well, redundant :)
since the QR code handling has been improved to the point where we should be able to take PSBTs from the camera to sign.
Have you looked at how we do animated QR PSBTs? A number of hardware and software Bitcoin wallets and services now support it. It is quite mature, has been ported to multiple languages, and has other advantages.
https://github.com/BlockchainCommons/Research/blob/master/papers/bcr-2020-006-urtypes.md#partially-signed-bitcoin-transaction-psbt-crypto-psbt
Hi @ChristopherA - yes, I believe we are targeting that format, however this PR and repo aren't the place to discuss it. Please follow https://github.com/Blockstream/Jade where the PR adding support will land after internal review, or raise an issue there if you have questions/comments, thanks.
In the meantime though I'd remove the redundant memset just because it's, well, redundant :)
edit: Oh, sorry, I see you're saying it's really redundant because there's another memset that zeroes the array already... Yes, sure, let's remove it...
old: Please leave the memset in, it's there for a reason: to remove secrets from the stack. Now of course everyone knows that this won't work because the compiler will remove dead stores, but we have the plain memsets in the code because this is how the code evolved. There's an old PR upstream that replaces all the memsets with proper methods for overwriting secret data. (Yeah, I should rebase this...) Once that lands upstream, we should do the same changes here.
So in a sense, the dead memsets currently in the code base act as markers for where we should do proper overwriting in the future, and so we should keep them.
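For context, one well-known way to do "proper overwriting" that the optimizer cannot elide (a sketch of the general technique, not the upstream PR's code) is to route memset through a volatile function pointer:

```c
#include <stddef.h>
#include <string.h>

/* Because the pointer is volatile, the compiler must load it at each call
 * site and cannot prove the store dead, so the clear survives optimization. */
static void *(*volatile memset_ptr)(void *, int, size_t) = memset;

static void clear_secret(void *secret, size_t len) {
    memset_ptr(secret, 0, len);
}
```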
I don't know how other reviewers feel, but I think I'd need some refresher on how the surjection proofs work before I'm in a position to review this. Or maybe @apoelstra can walk one of us through the changes.
@real-or-random the surjection proofs work by taking a single output commitment, a variable number of input commitments, and then doing a ring signature over (output - input_i) for all i. The premise is that the user will only be able to construct such a ring signature if the output is merely a reblinding of one of the inputs.
Everything else in this code is just mechanical stuff to make the commitment structure and proof serialization match the deployed surjection proofs in Elements.
Oh -- and to add, a "ring signature" is just a Schnorr signature in both cases, since we're proving a single value so there is only one key to check.
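In symbols (notation added here, not from the thread):

```latex
% Each input i has ephemeral tag A_i = H_{a_i} + r_i G and the output has
% tag C = H_{a_j} + r G, a reblinding of input j. The proof is a ring
% signature over the keys
\[
    P_i = C - A_i, \qquad i = 1, \dots, n,
\]
% which the prover can produce because it knows the discrete log of
\[
    P_j = C - A_j = (r - r_j)\,G .
\]
% With n = 1 the ring has a single member, so the proof degenerates to an
% ordinary Schnorr signature for the key P_1.
```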
@real-or-random I'm going to make another PR on top of this which pulls the asset ID/entropy logic from Elements Core into secp-zkp, so the sooner we can merge this the simpler my life will be.
Actually, scrap that. On further inspection I think the API separation is already fine.
Ok I'll try to have a look tomorrow. :)
@real-or-random I have pushed fixup commits which address all your nits except the code-sharing ones. If you'd like I can refactor the code, but I don't think the savings would be super big and it'd undermine the "make these functions as self-contained as possible" goal of writing this.
Just for my understanding: the benefit here is simply efficiency, right?
In general, I think we could add some more tests that check consistency with the ordinary functions, maybe also with bitflips (because we have many "bitmaps"/"flags").
You didn't answer these. I guess then another goal is to have a simple, readable "reference-like" implementation?
And for the tests: I still think that consistency tests could be useful. (Not sure about the bitflips anymore, I think you've convinced me that we know what we're doing with the header bits and "used input" bitmap.)
- Regarding the purpose of these functions, there are two goals: to be a reference implementation for somebody who is working on implementing them on a Ledger with bare crypto primitives; and efficiency.
- Regarding the comment, revisiting this I think the "how to create these" comment does make sense on the verify function and less so on the create function.
I'll add some more consistency tests, and a test for the size thing.
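Such a consistency test could take roughly this shape, using only the long-standing rangeproof API (the new create_value/verify_value signatures aren't quoted in this thread, so this sketch checks the ordinary round trip the new functions are meant to agree with; setup of the context and commitment is assumed):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <secp256k1_rangeproof.h>

/* Sketch of a consistency check: an exact-value proof produced by the full
 * secp256k1_rangeproof_sign (exp = -1) must pass the ordinary verifier and
 * reveal min_value == max_value == value; a dedicated verify_value function
 * should accept the very same proof. */
static void consistency_sketch(const secp256k1_context *ctx,
                               const secp256k1_pedersen_commitment *commit,
                               const unsigned char *blind,
                               const unsigned char *nonce,
                               uint64_t value,
                               const secp256k1_generator *gen) {
    unsigned char proof[5134]; /* documented maximum rangeproof size */
    size_t plen = sizeof(proof);
    uint64_t min_value, max_value;

    assert(secp256k1_rangeproof_sign(ctx, proof, &plen, value, commit, blind,
                                     nonce, -1, 0, value, NULL, 0, NULL, 0,
                                     gen));
    assert(secp256k1_rangeproof_verify(ctx, &min_value, &max_value, commit,
                                       proof, plen, NULL, 0, gen));
    assert(min_value == value && max_value == value);
}
```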
One last thing -- @real-or-random should I rename these functions from create_value and verify_value to create_single and verify_single?
One last thing -- @real-or-random should I rename these functions from create_value and verify_value to create_single and verify_single?
Ah yeah, I haven't really thought about the names. Now that you mention it, I think create_single and verify_single are slightly better; but I'd suggest create_exact and verify_exact, which is even more precise if you ask me.
edit: Oh and I wonder why the tests now fail.
@real-or-random I think one of the API tests tripped the "providing plen between 65 and 73 doesn't work" bug, and I was asserting that it failed (a copy/paste error, because almost all the API tests end in == 0). So when I fixed the bug it broke the API test.
I'll start running git rebase -x 'make && ./tests 1' before pushing from now on.
...ok, I might not be able to fix this today, but there's another bug in create_value where I use &proof[offset] before that value is filled in ... the fact that the tests were passing before seems to be a very surprising coincidence, related to my calling create_value twice in a row with the same parameters before calling verify_value.
The fix is simple enough -- re-order the operations in create_value. But it may involve rewriting much of the function because I reused so many variables to save stack space.
edit: flight delayed, got it. I did indeed have to rewrite create_value, sorry. Running through the tests again and then I'll push.
Ok @real-or-random one more question ... should you be able to "rewind" single-value proofs? I'm going to vote no, it will make my life much harder (I will need to recreate the insane hmac-sha2-based RNG and run a ton of crap through it) and I don't see any real value to it.
Ok, this one should work :sweat_smile:
Fixups look good except for one point below. Can you squash?
I liked the previous method of computing k more. If you derive k from pp_comm, it's much easier to see that reuse of k is excluded. And indeed, at the moment it's somewhat strange: a prover who knows the dlog between the two generators (it's not absurd that this form of "trapdoor commitment" is useful in some protocol) would be able to generate inputs that lead to reuse of k.
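For context, the concern is the standard Schnorr nonce-reuse break:

```latex
% Two signatures (s_1, e_1) and (s_2, e_2) made with the same nonce k and
% secret key x satisfy
\[
    s_1 = k + e_1 x, \qquad s_2 = k + e_2 x,
\]
% so anyone can recover the key:
\[
    x = \frac{s_1 - s_2}{e_1 - e_2} .
\]
% Deriving k by hashing pp_comm, which commits to all proof inputs, makes it
% straightforward to argue that two distinct proofs never share a nonce.
```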
Ok @real-or-random one more question ... should you be able to "rewind" single-value proofs? I'm going to vote no, it will make my life much harder (I will need to recreate the insane hmac-sha2-based RNG and run a ton of crap through it) and I don't see any real value to it.
Yep, if you don't think we need them. (I don't know, I don't use these methods :D).
Rebase/squash only. Will add one more fixup which changes how k is computed.
Ok, rebased and added one more commit with @real-or-random's suggestion to compute k differently. I can squash that commit if you'd like though I don't really think it's necessary.