taproot-assets
addr: create new multi-addr scheme to support receiving fungible assets with a group key
Today addresses contain the asset ID that the party wants to receive. This mostly works, but once you want to receive one of many possible tranches of a fungible asset with a group key, things break down a bit.
Group Asset ID Multi-Addrs
Consider that an asset group G may have distinct asset IDs {A, B, C}. The way addresses work today, the receiving party explicitly says that they want asset A, as they need this to be able to look for the asset anchoring output on chain. If the receiver doesn't know what the sender will send ahead of time, then this falls short.
One solution to the above is that the address lists a set of asset IDs (in addition to the main group key). With this, they can watch for all 3 potential versions on chain. This works for simple cases where the sender has an asset UTXO that is precisely sized to pay K units to the receiver. However, things break down once again when you factor in the scenario where the sender has some/all of the asset IDs, and can solve the subset sum problem to send a total amount (eg: 3 units of A, 2 of B, 1 of C).
Unknown Asset Group Amount Distribution
The core issue with the above is that the receiver doesn't know the amount distribution that'll be sent to satisfy the spend. If both sides are communicating, then this isn't an issue, as the vPSBT flow can be used.
Late Binding of Split Amounts
In the past, we made some changes to the way splits work to make them less tightly bound. Eg: the split asset has an all zero prev ID, as the receiver doesn't know what inputs will actually be spent.
I think we can extend that further for this special address type, by allowing the amount to be bound by the split inclusion proof itself. In this case, the amount of the leaf on chain will be zero. The asset inserted into the split commitment tree will instead carry the value distribution (it must by definition to prevent asset inflation). When validating the split inclusion proof this value will be assumed, and when spending the asset this is the value that'll be inserted into the input sum tree.
The main quirk of this is that if sending all 3 asset IDs mentioned above, then 3 outputs are required: the receiver looks for all 3, and then knows how to assemble the split proof needed to spend them. It's also possible to create only a single top-level output on chain. In this case, the receiver also looks for the combinations of the asset, and makes a new tree composed of those.
Blue sky thinking, but, would the idea I've outlined below work? I'm sure to learn something if I'm way off. It sort of builds on what we already have in place.
Send process creates two taproot outputs:

- The existing style taproot output containing a tap commitment. But the commitment may contain assets from multiple different asset IDs (or maybe not, we may need one output per asset ID instead). This output is not discoverable from (or directly associated with) a tap address. (Also includes an index to the second output?)
- A new taproot output that's group key specific and associated with a given group key tap address. This output contains (in tapscript) the group key and the amount of that asset group sent as a total. The total amount matches that found in the first tap output commitment tree. This output can be identified and matched to a tap address based on tap output key. (Also includes index to output 1 (above)? Still discoverable from address via iteration.)
Then, having identified output 2, the receiving node knows to start looking for a proof. And the proof interprets output 1.
This way we could have tap addresses stipulating a group key (or even group keys). And the sender could send whatever asset IDs satisfy it.
The new output triggers proof lookup and is more general than the asset containing output. So the tap address can be generalised also (group key(s) not asset ID).
Proof courier proof lookup might need to be generalised. Lookup all asset IDs for a given outpoint and group key. Etc.
Will outline the design changes I've been sitting on before replying to the comments upthread.
I think we can solve this without changing any protocol definitions of proofs, but rather just changing the order-of-operations on the receiver side.
Problem Statement
The core issue is that, for non-interactive asset transfers, the sender and receiver must be able to compute some shared reference that can be used with a proof courier, in order to transfer the needed transfer proofs.
For the hashmail proof courier, we use H(scriptKey), while the universe courier uses the UniverseKey directly, which is similar to asset.TapCommitmentKey() || outpoint || scriptKey.
In the second case, the outpoint is only learned by the receiver after the transfer TX is confirmed, as they are watching for a TX that includes the output they precomputed from an address.
Right now, the receiver computes an outpoint from an address, and waits for confirmation of a TX containing that output. Once they have confirmation, they query the configured proof courier for the transfer proof and verify the asset transfer. However, we can reorder these events without compromising on transfer security properties.
Specifically, a valid transfer requires both a valid transfer proof and a confirmed TX that anchors the Tap trees specified in the proof.
Alternate Implementations
Consider a different order of operations:
- The receiver creates an address for receiving grouped assets and sends this to the asset sender.
- The receiver computes an identifier from this address. (Discussed below). They start a background loop that polls the proof courier on each new block for a transfer proof matching that identifier.
- The sender performs the transfer, broadcasts the transfer TX, and uploads the transfer proof.
- Once the transfer TX confirms, the receiver polls the proof courier and fetches the transfer proof.
- The receiver validates the transfer proof, including the SPV proof for the transfer TX. If the complete proof is validated successfully, they consider the transfer complete.
The change here is that the proof already commits to the transfer TX, so we don't actually need to be bound to or scan for a specific output. We can trigger proof receipt on each new block, and validate the transfer TX as part of proof validation. This relaxes the requirements of the mapping from an address to a unique identifier.
Alternate Unique Proof Identifiers
A possible identifier could be H(tapscriptRoot || asset.TapCommitmentKey() || chain_params || asset_version || internal_key || script_key || amount || courier_addr).
Note that this identifier omits the outpoint, the asset.AssetCommitmentKey, and the asset ID.
The asset ID or groupKey are committed via the TapCommitmentKey.
If we omit the specific branches of the asset commitment that fulfill the request, then we can use any blend of assets in a group to pay to the address.
In plain English, "pay this amount, with these keys, of an asset specified by this asset.TapCommitmentKey, and push the proofs to this courier."
With such an identifier, we may still want to allow the receiver to specify subsets of an asset group they are willing to accept. I think an allow/deny list of assetIDs should work for that, e.g. "only send me trading cards from the first 5 issuances", "send me anything this universe recognizes as issued", etc.
Privacy Implications
I don't think there is a significant change in user privacy with this model.
Right now asset receivers using either built-in courier leak timing information about the transfer, and the proof is sent unencrypted. Couriers could also identify a specific transfer if they are aware of the related address. The proposed model trades timing information for increased courier load from polling, which could be addressed with client-side jitter on when they fetch proofs.
Implementation Considerations
I think this would be a significant rework of the custodian and proof courier subsystems. Mostly in changing the address format, adding logic to compute a new identifier from an address, and wiring that through multiple subsystems and state machines. We would also need to change some subsystems to add polling on each new block that isn't tied to a TX confirmation (so a proof fetch instead of a TX confirmation callback). I think overall the trickiest 'business' logic for handling transfer receipt would not need to change, outside of using a new identifier. I think the DB changes involved should be minimal, as we should be able to map from the new identifier to the existing identifiers used for DB operations.
Nice write up @jharveyb ! I thought I'd include here the question I asked via DM.
- The receiver computes an identifier from this address. (Discussed below). They start a background loop that polls the proof courier on each new block for a transfer proof matching that identifier.
Will the proof courier need to be polled for every single block? And for the lifetime of the address?
You mentioned via DM that that will be the case.
I think that means that the address is then intended for single use only. Otherwise every tap node will have to query their proof courier for each of their generated tap addresses and for each block. Or they risk missing out on receiving assets.
I wonder if a type 2 output like I mentioned here might let us re-use addresses. A receiving node will only need to query a proof courier having identified a generalised address-specific taproot output (a bit like we currently trigger proof lookup from a recognised taproot output key). It might also provide compensation for the subsequent courier proof lookup (if that ever became necessary).
- A new taproot output that's group key specific and associated with a given group key tap address. This output contains (in tapscript) the group key and the amount of that asset group sent as a total. The total amount matches that found in the first tap output commitment tree. This output can be identified and matched to a tap address based on tap output key. (Also includes index to output 1 (above)? Still discoverable from address via iteration).
IIUC, you're suggesting that this second output doesn't use the root of a Taproot Assets tree as a leaf in its Tapscript tree, but instead uses the hash of a different type of unique identifier, something like H(asset.TapCommitmentKey() || scriptKey || amount), which would allow the receiver to still watch the chain for specific UTXOs and then scan the other outputs of those TXs for proofs?
Will the proof courier need to be polled for every single block? And for the lifetime of the address?
Good point, I didn't think about address lifetimes originally. With your design I think we'd trade that polling cost for bigger on-chain TXs for sends to addresses with a group key. However, the original solution from the first comment in this thread also required multiple UTXOs, so maybe that is an acceptable tradeoff.
IIUC, you're suggesting that this second output doesn't use the root of a Taproot Assets tree as a leaf in its Tapscript tree, but instead uses the hash of a different type of unique identifier, something like H(asset.TapCommitmentKey() || scriptKey || amount), which would allow the receiver to still watch the chain for specific UTXOs and then scan the other outputs of those TXs for proofs?
Yes, that's basically the idea.
But I wonder if we could take it further and drop the amount field from the unique identifier. It would be nice to have addresses that are flexible enough to receive any amounts.
And perhaps it would be possible to include multiple tap commitment keys in the unique identifier, to correspond with a multi group key tap address. The use case would be: "I'm willing to accept any asset subset from these groups."
Then, we'd have an address that can receive any amount of any asset (or combination of assets) restricted to an asset group key set.
(This leaves room for "invoices" as distinct to addresses.)
Great inputs, both of you!
To summarize (and to add my own thoughts), it sounds to me like we have four options so far:
1. Increase the on-chain footprint of our non-interactive sends (@ffranr's idea)
   - Given the implications for costs (increased fees due to more outputs, and the "dummy" output would also need to have a non-dust output value) and privacy (it makes TAP sends easier to detect because there would always be at least 3 P2TR outputs), I'm not sure we'd want to go down that route, since the idea of TAP is to be as off-chain as possible.
2. Reverse the order of operations, allow polling with a different identifier (@jharveyb's idea)
   - This would require changes both in the universe and our send logic, and would put more strain on both sides (requiring all receivers to make at least one request every 10 minutes). But it keeps everything off-chain, which is nice.
   - I think we'd also want to introduce the concept of expiring an address, so we know when we can stop polling.
3. Combination of late binding and splitting outputs on chain (@Roasbeef's original proposal above)
   - Requires changes to the BIP and also increases the on-chain footprint.
4. Find another way to transmit asset IDs + amounts to the receiver (my own brainstorming)
   - What we are missing on the receiver side to reconstruct the tree is the combination of asset IDs and amounts of the individual pieces being sent. If we have that, we know what leaves to create in the asset-level tree and can reconstruct the full TAP tree.
   - If we find a way to transmit this information from sender to receiver independently, then the rest of the existing flow could be used. If we borrow the idea of late amount binding from the original proposal, then we only have to transmit the chosen asset IDs, which we could maybe compress into a single 32-byte value (a bloom filter or another approach that compresses multiple IDs into a single value that is still reasonably easy to deconstruct into individual values, if the set of all possible values is known).

[ JHB Editor's Note: Adding a 5th option that postdates this original comment ]

5. Upload the assetID + amount list (encrypted) to the proof courier, and have the receiver fetch it to compute the on-chain output.
   - The receiver has two extra round-trips with the proof courier to decrypt the necessary data, and the rest of the receive flow stays the same. This would require some changes for the proof courier to support this new type in addition to issuance/transfer proofs.
* I think we'd also want to introduce the concept of expiring an address, so we know when we can stop polling.
The more I think about it the less I like expiry, especially with high fees - how long do I keep scanning after the expiry date until I'm sure that I received all transfers intended for that address? Until fees get back to 1 sat/vB, or do I just care that someone starts a transfer before that date, even if it clears in 3 days?
Maybe that's not too different from issues we would have now with high fees though.
4. Find another way to transmit asset IDs + amounts to receiver (my own brainstorming)
- What we are missing on the receiver side to reconstruct the tree is the combinations of asset IDs and amounts of the individual pieces being sent. If we have that, we know what leaves to create in the asset-level tree and can reconstruct the full TAP tree.
So another half-round of communication, this time from sender to receiver? Maybe the tools being discussed in https://github.com/lightninglabs/taproot-assets/issues/478 would be helpful in coordinating that (or the unique identifier scheme I outlined above).
And after sending that data, the receiver computes the matching on-chain output, adds it to their watchlist, and continues the receive process as we have implemented it now?
* If we find a way to transmit this information from sender to receiver in an independent way, then the rest of the existing flow could be used. If we borrow the idea of late amount binding from the original proposal, then we only have to transmit the asset IDs chosen, which we could maybe compress into a single 32-byte value (bloom filter or other approach that compresses multiple IDs into a single value that is still reasonably easy to deconstruct into individual values if the set of all possible values is known)
Maybe also something like minisketch? Though I don't think we can assume that there is any overlap.
AFAIU we've thought of two tradeoffs - add another half-round of communication (independent of address format changes), or add an on-chain output (independent of BIP changes). I'm not sure which approach we prefer, or if there's a third way.
With the extra half-round, we also lose the ability to trigger the receive process by (only) watching the chain; with the extra output, we instead increase the on-chain cost for these types of transfers.
@Roasbeef writes the following on late binding split asset amounts:
The asset inserted into the split commitment tree will instead carry the value distribution (it must by definition to prevent asset inflation). When validating the split inclusion proof this value will be assumed, and when spending the asset this is the value that'll be inserted into the input sum tree.
What is meant by a "value distribution" here?
What is meant by a "value distribution" here?
I understood this to be either the values used for each input to the send, or the full assetID -> value mapping used for the send.
I thought about guggero's idea upthread + 478 and using DH, and we could maybe add another RT between the receiver and the proof courier to enable DH and then send the assetID -> amount mapping that allows for the chain watching we have now.
The problem as-is is that the receiver has no pubkey from the sender to use for DH. We could use something like the unique ID scheme I outlined above to async send a pubkey from the sender to the receiver. From there, both parties can perform DH and use some private hashmail (or other) address to send the assetID -> amount mapping. Once the receiver fetches that, the proof courier can stop serving that entry, and the sender pubkey.
Side note: Do we also need to transmit the asset versions used from sender to receiver? E.g. they send with a mix of V0 and V1 assets. Maybe that should just be a send failure instead.
One downside with this approach is that all proof courier types would want to have some hashmail-style service for serving that sender pubkey and assetID -> amount mapping (or we could possibly use a single-level MS-SMT keyed on both the unique address ID and the DH shared secret). Either way it would expand the implementation for proof couriers.
Another wrinkle is that only one transfer to an address can be pending at the same time (no concurrent address re-use). I think that would also cause an issue with our proof courier setup as-is, but it's still a feature worth considering.
The full proposed flow:

- Sender constructs the transfer TX and broadcasts.
- Sender uses the script key of the first asset input, `firstInputScriptKey`, to perform DH with the script key from the receiver address and generates `sendSharedSecret`.
- Sender uploads `firstInputScriptKey` to the proof courier at endpoint `addrUniqueID`.
- Sender uploads the assetID -> amount mapping, `outputLeafMap`, to the proof courier at endpoint `sendSharedSecret`.
- Sender is done at this point, and the two prior uploads could happen concurrently.
- Receiver uses the existing backoff procedure to poll the proof courier at endpoint `addrUniqueID`. Eventually they fetch `firstInputScriptKey` and finish polling.
- Receiver uses `firstInputScriptKey` to perform DH and generates `sendSharedSecret`.
- Receiver polls the proof courier at endpoint `sendSharedSecret` for `outputLeafMap`. Eventually they fetch `outputLeafMap` and finish polling.
- Receiver uses `outputLeafMap` to compute the full on-chain output they are receiving for the transfer, and begins watching the chain for this output.
- Receiver follows the existing procedure for detecting the transfer TX and then fetching the transfer proofs, verifying those, etc.
The end result is two more RTs between receiver and proof courier compared to what we have now, and more total data being temporarily stored by the proof courier. The number of RTs on the sender side is the same, but carrying more data. There is also no indefinite polling like in my original solution.
Overall I think this isn't too bad wrt. implementation complexity or transfer overhead, and users could still swap in alternative proof courier backends to support this flow if needed.
Will move issue out of shaping given we've scoped all options here: https://github.com/lightninglabs/taproot-assets/issues/291#issuecomment-1862415461
and JHBs flow discussion: https://github.com/lightninglabs/taproot-assets/issues/291#issuecomment-1885266933
We need to discuss a practical use case regarding the bridge which we are creating between Taproot Assets USDT and Lightning USDT. As we receive USDT we need to mint Lightning USDT, and similarly when we send we need to burn Lightning USDT. Since the burn API needs an asset ID, not a group ID, how can we determine which asset ID to burn?

For example, suppose Client 1 brings in 100 USDT: we mint 100 Lightning USDT with group key G1 and asset ID "asset_1_xxxxx". Client 2 brings in 50 USDT: we mint another 50 Lightning USDT with the same group key G1, but it gets another asset ID, "asset_2_xxxxx". So Speed has 150 USDT on Ethereum and, similarly, 150 in Lightning USDT. Now transactions happen at the Lightning layer, and Client 1 sends 70 Lightning USDT to Client 2 with group key G1**. The balances are then Client 1: 30 USDT, Client 2: 120 USDT. Now Client 2 takes a payout of 120 USDT on the Ethereum chain, so we need to burn 120 Lightning USDT***.

Issues:

** There is no API to send and receive via group key; it is based on asset ID, and the balance of our asset USDT is spread across multiple asset IDs, so we will not be able to complete this transaction.

*** The burn API takes an asset ID. Since the balance of USDT is spread across multiple asset IDs, how will we burn a specific amount of USDT in order to keep the pegged value the same? Also, the complete corpus can't be burned, as per the taproot-assets docs.

**** Also, while minting, the batch needs to be marked as finalized by calling finalize, and we need to wait for the batch state to change from pending to finalized. This may take some time, and during that period our pegged value will be off, since we will have more USDT on ETH and less on Lightning. Similarly, is there any case where batch finalize may give an error, and how should we handle such cases?

Conclusion: Honestly, the creation of multiple asset IDs for the same asset alongside a group key is creating a lot of confusion on our end, and we are still struggling to understand whether the group ID is the most important ID. If so, then we think all APIs should work on group ID rather than asset ID.
Thank you for providing a detailed use case! It makes it much easier to discuss.
Client 2 brings in 50 USDT we mint another 50 Lightning USDT with group key as G1 but will get another asset id as "asset_2_xxxxx"
👍🏽
No API to send and receive via group key, it is based on asset id, and the balance of our Asset USDT is spread across multiple asset ids, so we will not be able to go through this transaction.
Right now this cannot be done with addresses, yes. It can be done interactively using vPSBTs (The bridge and Client 2 would need to talk to each other a few times).
We have considered the handling of group keys as part of adding asset channel support, and the tl;dr is that this shouldn't be a problem for asset channels. The price quotes for the asset (USDT from your example) would use a group key, not an asset ID. This would solve the issue of manually picking asset IDs. We're still in the process of implementing this though. But the API to handle that should be much easier to use than vPSBTs.
*** Burn API has asset id, since balance of USDT is spread across multiple asset id, how will we burn specific amount of USDT in order to keep pegged value same.
Good point! IIUC we could change that to support group keys, or at least multiple burns in one TX.
**** Also while minting the batch need to be marked as finalized by calling finalize and we need to wait for the batch state to change from pending to finalized, this may take some time in that duration our pegged value will be less, since we will have more USDT on eth, and less on lighting.
Yes, you can only burn as fast as you can get a TX confirmed. IMO this is the same as having the price change while your TX is unconfirmed; I'm not sure if there is a straightforward solution, maybe RBF or some other scheme for TX replacement could work?
Honestly the creation of multiple assets ids for same asset and having a group key is creating a lot of confusion at our end, and we are still struggling to understand whether group id is the most important id, if so then we think all APIs should work on group id rather then asset id.
A fair point; we're definitely keeping this in mind for the next release! For the case you mentioned, you'd definitely want to be using group keys as you'll be issuing more units of the asset many times.
#874 is the proposed path forward for this, h/t @ffranr for the further discussion to narrow down the options.
I think we should close this issue in favour of https://github.com/lightninglabs/taproot-assets/issues/874 .
There are other solutions to this problem that we could implement. For example, opting for a larger on-chain footprint and simpler proof couriers with fewer round trips or no polling. That sort of solution could be described in another issue.
@dstadulis do you agree that we can now close this issue as complete in favour of #874?