celestia-node
feat: add support for Blobstream API
Closes https://github.com/celestiaorg/celestia-node/issues/3361
Codecov Report
Attention: Patch coverage is 40.98712% with 275 lines in your changes missing coverage. Please review.
Project coverage is 45.55%. Comparing base (2469e7a) to head (1183e46). Report is 144 commits behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #3470 +/- ##
==========================================
+ Coverage 44.83% 45.55% +0.71%
==========================================
Files 265 280 +15
Lines 14620 15848 +1228
==========================================
+ Hits 6555 7219 +664
- Misses 7313 7798 +485
- Partials 752 831 +79
@renaynay raised a good point that I agree with: choosing a more generic name for this module and API, something other than "blobstream", might be better. Although it initially exists to support blobstream, it can and should serve a generic purpose.
Any suggestions for better names @renaynay @adlerjohn @liamsi?
@vgonkivs also mentioned potentially moving it into the blob module itself, I believe.
> Any suggestions for better names @renaynay @adlerjohn @liamsi?
Yeah, I also raised this somewhere else. So ideally:
- anything shares-related would be folded into the shares "module"
- hence, `ProveShares` and `ProveCommitment` could be moved into the shares module
- even better: `shares.GetProof` could actually accommodate `ProveShares`
- `GetDataCommitment` and `DataRootInclusionProof` are really very blobstream-specific, but they could live in a "blocks" module, as they operate over a range of blocks/block heights. The closest we have for this is the header module.
The nodebuilder folder is used only for module creation. Maybe you should add a blobstream package and move all the logic there?
@insiderbyte where do you mean I should put the blobstream package? In the root?
@rach-id so there is a general desire to follow @liamsi's comment about decomposing this, such that instead of "blobstream" being a whole module, the new API functionality lives in each of its relevant existing modules, with some functionality moved to another package (like the "blocks" one he also suggested). There was a promise that everyone would chime in with reviews and suggestions on where to place things; I'll chase that, but we can also work on it together if you want to suggest a different layout.
@ramin Got you 👍 I was under the impression that this PR should be merged, and then we do the refactor. However, if we want to do it here, I'm fine with that.
The proposed structure would be:
- `header` module: `GetDataCommitment` and `GetDataRootInclusionProof`, which should maybe be renamed to `GetDataRootTupleInclusionProof` because we're proving the data root tuple, i.e. `(height, data_root)`, and not just the data root
- `blob` module: `ProveCommitment`; or we even replace `GetProof` in this PR to return the correct proof instead of the current one. Discussed here: https://github.com/celestiaorg/celestia-node/pull/3470#discussion_r1645208264
- `share` module: `ProveShares`
If everyone agrees, I can proceed with the refactor.
I like header better than block.
agreed with proposed structure
@renaynay @Wondertan blobstream module re-introduced and endpoints removed from header module. Please review when you have time :D
> This will probably only be run by a single machine

This will be run by zk-Blobstream provers. We now have:
- BlobstreamX with some operators
- Blobstream0: close to production
And also by users/validators in case of a fraud proof. For the commitment proof, it can also be used by sovereign rollups in their fraud proofs (still to be defined/discussed). So, there are a few parties that will use these endpoints.
We don't need to bloat the repo with this added module. I feel like it can be implemented completely outside (in the blobstreamX relayer code) using the header module and constructing the merkle tree from the results
I don't think it's bloating the repo. The share proof and commitment proof belong in this repo. As for `GetDataRootTupleRoot` and `GetDataRootTupleInclusionProof`, they can be implemented somewhere else (I did the first implementation here: https://github.com/celestiaorg/blobstream-node-api/pull/1), but Blobstream is part of the Celestia protocol in the short-mid term, so it's fine to have its API as part of the official node implementation.
Although, I think we should not use the word "Commitment" publicly because it's not clear what it means. I would advocate for `GetInclusionProof`.
For this, it will replace the blob.Proof in here in a subsequent PR. We just didn't want to add more work to this PR as it's been open for such a long time. Check here for corresponding issue: https://github.com/celestiaorg/celestia-node/issues/3582
> For the commitment proof, it can also be used by sovereign rollups in their fraud proofs (still to be defined/discussed).
I think the commitment proof (and share proof) make sense. Proving inclusion of a blob (and shares) is a generic use case. My original pushback was against adding the blobstream module. Can't that logic be run by the "relayer"/"prover", which makes the calls to the node's API, rather than living inside the node itself?
> but Blobstream is part of the Celestia protocol in the short-mid term
I don't view Blobstream as core to the DA protocol. It's a service built "on top of" Celestia.
> Can't that logic be run by the "relayer"/"prover", which makes the calls to the node's API, rather than living inside the node itself?
You're right, it can. I just checked our current implementations of Blobstream, BlobstreamX and Blobstream0, and they both reimplement that logic, so they're not using these endpoints. So the main usage of those two endpoints would be generating an inclusion proof to verify the inclusion of a blob.
So we have two options:
- Provide this API with the proof, and users can just use it; plus, it's self-contained and can be removed from the node at any time in the future
- Or, we delete it and write specs for the endpoints. Then, whenever someone is building a rollup, we tell them to re-implement it using the specs.
I'll take this discussion to Slack and we can get others' opinions.
Thanks for flagging this 👍
Another future option, for when the node actually becomes pluggable: you will be able to make a module that lives outside the node but can be integrated into it as if it were part of it. Basically, there will be enhanced nodes with additional application-specific functionality depending on module configuration. So we get seamless integration into the node without needing the node team to maintain all the non-standard modules.