rollmint
Separate signed header and data to different blobs
fixes #829
Major changes are related to the da/ package and dalc.proto:
proto/dalc/dalc.proto
- splits the various block-related types into header and data variants
da/da.go
- splits the block submit, check-availability, and retrieve methods into header and data variants
- accepts two namespace IDs (header & data) in the DALC init (see also config/config.go, config/defaults.go, config/config_test.go)
- check the changes in the implementations (a rough sketch of the split interface follows this list):
- da/celestia/celestia.go
- fairly straightforward split of the block methods into header and data variants
- da/celestia/mock/server.go
- this has somewhat convoluted logic to differentiate the header & data methods, which both arrive at common APIs like submit_pfb, namespaced_shares, and namespaced_data (it uses the namespace ID to tell them apart)
- da/grpc/grpc.go & da/grpc/mockserv/mockserv.go
- fairly straightforward split of the block methods into header and data variants
- da/mock/mock.go
- has some additional logic to handle mocking the DA. Although this is not critical (it is just a mock, and the logic does not necessarily translate 1-to-1 to an actual DA), some of the logic is questionable.
- update to da/test/da_test.go
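For orientation, here is a rough sketch of what the split DA client interface could look like after this change. The method names SubmitBlockHeader/SubmitBlockData and RetrieveBlockHeaders/RetrieveBlockDatas come from this PR; the exact signatures, the result types, and the shape of the two-namespace Init are illustrative assumptions, not the literal da/da.go code:

```go
// Sketch only -- the real interface lives in da/da.go. NamespaceID and the
// Result* types below are placeholders standing in for the actual definitions.
package da

type NamespaceID [8]byte

type (
	SignedHeader          struct{ /* header fields */ }
	Data                  struct{ /* txs, ISRs, ... */ }
	ResultSubmit          struct{ /* DA response */ }
	ResultCheck           struct{ /* availability result */ }
	ResultRetrieveHeaders struct{ Headers []*SignedHeader }
	ResultRetrieveDatas   struct{ Datas []*Data }
)

// DataAvailabilityLayerClient after the split: every block-level method now
// has a header variant and a data variant, and Init takes separate namespace
// IDs for each (see config/config.go). The mock server differentiates the two
// kinds of calls by namespace ID.
type DataAvailabilityLayerClient interface {
	Init(headerNamespace, dataNamespace NamespaceID, config []byte) error

	SubmitBlockHeader(header *SignedHeader) ResultSubmit
	SubmitBlockData(data *Data) ResultSubmit

	CheckBlockHeaderAvailability(daHeight uint64) ResultCheck
	CheckBlockDataAvailability(daHeight uint64) ResultCheck

	RetrieveBlockHeaders(daHeight uint64) ResultRetrieveHeaders
	RetrieveBlockDatas(daHeight uint64) ResultRetrieveDatas
}
```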
The next important change is in manager.go, which handles this changed logic:
- removes namespaceId from NewBlockExecutor (it was not used)
- fetchBlock returns header and data responses instead of a block response, as it makes two separate calls to RetrieveBlockHeaders and RetrieveBlockDatas
- changes submitBlockToDA to accept an additional parameter `onlyHeader` to fire either SubmitBlockHeader or SubmitBlockData. The idea here is that we first submit the data and get the data commitment (right now it is just Data.Hash(), but later it will be changed to the DA commitment), insert that into the header before signing, and then submit the signed header to DA in a second call to submitBlockToDA (see the sketch below).
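A condensed sketch of that two-phase flow; submitBlockToDA, onlyHeader, Data.Hash(), and the SubmitBlockHeader/SubmitBlockData split come from this PR, while the stub types and the signHeader helper are hypothetical stand-ins:

```go
package block

// Minimal stand-ins; real definitions live in types/ and block/manager.go.
type (
	Hash         [32]byte
	Data         struct{ /* txs, ISRs, ... */ }
	Header       struct{ DataHash Hash }
	SignedHeader struct{ Header Header }
	Block        struct {
		SignedHeader SignedHeader
		Data         Data
	}
	Manager struct{ /* dalc, store, ... */ }
)

func (d *Data) Hash() Hash { return Hash{} } // stand-in for types/hashing.go

func (m *Manager) submitBlockToDA(b *Block, onlyHeader bool) error { return nil }
func (m *Manager) signHeader(h *SignedHeader) error                { return nil } // hypothetical

// publishBlock: submit the data blob first, lift its commitment into the
// header, sign, then submit the signed header blob.
func (m *Manager) publishBlock(b *Block) error {
	if err := m.submitBlockToDA(b, false); err != nil { // data blob
		return err
	}
	b.SignedHeader.Header.DataHash = b.Data.Hash() // data commitment (DA commitment later)
	if err := m.signHeader(&b.SignedHeader); err != nil {
		return err
	}
	return m.submitBlockToDA(b, true) // signed header blob
}
```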
node/full.go (node/full_client_test.go, node/full_node_integration_test.go)
- changes related to accepting two namespace IDs
types/hashing.go
- adds Hash() to Data, which is used temporarily as the data commitment and included in the header's DataHash field; this is checked in ValidateBasic (types/validation.go)
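As a rough illustration of the temporary commitment and the matching check: the real Data.Hash() in types/hashing.go may hash differently; sha256 over the serialized data is an assumption here, as are the helper names:

```go
package types

import (
	"bytes"
	"crypto/sha256"
	"errors"
)

type Hash [32]byte

type Data struct{ raw []byte } // stand-in for the real serialized Data

// Hash returns a temporary data commitment; later this becomes the DA
// commitment provided by Celestia.
func (d *Data) Hash() Hash { return Hash(sha256.Sum256(d.raw)) }

type Header struct{ DataHash Hash }

// ValidateBasic-style check (see types/validation.go): the header's DataHash
// must match the commitment of the block's data.
func validateDataHash(h *Header, d *Data) error {
	dh := d.Hash()
	if !bytes.Equal(h.DataHash[:], dh[:]) {
		return errors.New("dataHash in header does not match hash of block data")
	}
	return nil
}
```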
types/serialization.go
- adds missing serialization methods (UnmarshalBinary, FromProto) to Data
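The new methods follow the conversion pattern used elsewhere in types/serialization.go; a minimal sketch, with pbData standing in for the generated protobuf type and unmarshalPB for the real proto unmarshal call:

```go
package types

type Tx []byte

type Data struct {
	Txs []Tx
	// IntermediateStateRoots, Evidence, ... omitted
}

// pbData stands in for the generated protobuf message for Data.
type pbData struct{ Txs [][]byte }

func unmarshalPB(raw []byte) (*pbData, error) { return &pbData{}, nil } // placeholder

// UnmarshalBinary decodes a binary-serialized Data into d.
func (d *Data) UnmarshalBinary(raw []byte) error {
	pData, err := unmarshalPB(raw)
	if err != nil {
		return err
	}
	return d.FromProto(pData)
}

// FromProto fills d from its protobuf representation.
func (d *Data) FromProto(other *pbData) error {
	d.Txs = make([]Tx, len(other.Txs))
	for i, tx := range other.Txs {
		d.Txs[i] = Tx(tx)
	}
	return nil
}
```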
state/executor.go (state/executor_test.go)
- removes namespaceID, which is not used in the executor
Codecov Report
Attention: Patch coverage is 47.76786% with 702 lines in your changes missing coverage. Please review.
Project coverage is 55.29%. Comparing base (ee5582f) to head (47ce34b). Report is 478 commits behind head on main.
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #858      +/-   ##
==========================================
- Coverage    56.11%   55.29%    -0.82%
==========================================
  Files           66       66
  Lines        10705    11727    +1022
==========================================
+ Hits          6007     6485     +478
- Misses        3830     4279     +449
- Partials       868      963      +95
why separate namespaces?
We can have faster queries and more efficient processing of headers if you only need headers and not data. @nashqueue has some ideas for future utility of the common header namespace.
@S1nus any downside to this?
Follow-up Issues:
- Before Blob Module:
- Make Hash calculated on the fly instead of being a response from DA
- Revert hash as a response in proto
- Revert changes to submitBlockToDA
- After "2 PFBs, one blob" node support:
- Change the logic to submit header/data together in one package instead of submitting them individually. (Maybe we should think about whether we want to aggregate all of the blob submissions into one PFB per DA height.)
- After Blob Module:
- Instead of using the Hash function, we should use the commitment provided by Celestia
@nashqueue @tzdybal re-review required.
Removed the hash dependency: we now directly compute the hash and insert it into the header's DataHash field before signing. While doing this, I came across a bug: we were signing the header with a DataHash obtained before the ISRs were written into the block data (meaning the data was effectively empty when we called data.Hash(), inserted it into the header, and signed).

To fix this, I had to refactor ApplyBlock (@tuxcanfly). The refactor mainly moves block validation outside of ApplyBlock, so that we can call validate before ApplyBlock when syncing, and after ApplyBlock when publishing a new block. This way, while publishing a new block, we can first apply the created block to compute the ISRs (this also solves @tuxcanfly's issue, since the updated NextValidators can be inserted into the header before signing it), then compute data.Hash() with the updated ISRs, insert it into the header's DataHash field, and sign it.
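In rough pseudo-Go, the new publish ordering looks like this; ApplyBlock, Validate, Data.Hash(), and DataHash are real names from this PR, while everything else is a stand-in:

```go
package block

// Minimal stand-ins; real code is in state/executor.go and block/manager.go.
type (
	Hash         [32]byte
	Data         struct{ /* txs + ISRs after apply */ }
	Header       struct{ DataHash Hash }
	SignedHeader struct{ Header Header }
	Block        struct {
		SignedHeader SignedHeader
		Data         Data
	}
	State    struct{}
	Executor struct{}
	Manager  struct{ executor *Executor }
)

func (d *Data) Hash() Hash { return Hash{} }

// Validate was moved out of ApplyBlock: syncing nodes call it before
// applying, while the publisher calls it after applying.
func (b *Block) Validate() error { return nil }

func (e *Executor) ApplyBlock(b *Block) (State, error) { return State{}, nil }
func (m *Manager) signHeader(h *SignedHeader) error    { return nil } // hypothetical
func (m *Manager) submitToDA(b *Block) error           { return nil } // hypothetical

// publish: apply first so the ISRs (and NextValidators) are final, then
// commit to the updated data in the header, sign, validate, and submit.
func (m *Manager) publish(b *Block) error {
	if _, err := m.executor.ApplyBlock(b); err != nil {
		return err
	}
	b.SignedHeader.Header.DataHash = b.Data.Hash() // now includes the ISRs
	if err := m.signHeader(&b.SignedHeader); err != nil {
		return err
	}
	if err := b.Validate(); err != nil { // validate after apply when publishing
		return err
	}
	return m.submitToDA(b)
}
```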
Also came across a potential issue: header and data blob separation assumes the same DA height (#913).
Note that I am hesitant to merge the two APIs (SubmitBlockHeader and SubmitBlockData), as it would create many issues in designing the mock correctly. If this is a must, we can address it in a separate issue.
New commits make sense. Only one small comment. As this is breaking so much, we should test it out thoroughly. Run a GM / Ethermint tutorial as a sanity check.
Also, protobuf linter complains.
- will run the tutorial.
- the protobuf linter complaints are not valid; it is asking for expansion of the proto messages for the two types (header and data), but that is not necessary. For example, it is asking me to expand SubmitBlockResponse (https://github.com/gupadhyaya/rollkit/blob/separate_blobs/proto/dalc/dalc.proto#L29) into SubmitBlockHeaderResponse and SubmitBlockDataResponse. This is not really needed, as both messages would have the same field `DAResponse result = 1;` and the newly created types would be redundant.
While testing against the actual Celestia DA, block data submission fails with `ERR DA layer submission failed error=",cannot use zero blob size" attempt=1 module=BlockManager`. The problem is that when there are no txs in the block, block.Data is empty, and trying to submit zero bytes to the Celestia DA fails. This was not an issue earlier, as we were submitting the whole block with SignedHeader and Data; even when no txs existed, the header was not empty.
Hopefully the new tx/ISR format (#885) fixes the problem: even when the txs are empty, the ISRs will wrap block.Data and make it non-empty, so we always have non-zero bytes to submit to DA. Is this assumption correct, @Manav-Aggarwal? (A defensive guard is sketched below.)
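Until #885 lands, a defensive guard on the submit path would sidestep the zero-blob error. This is a suggestion, not code from the PR; the names here are hypothetical:

```go
package block

import "log"

// dalcClient stands in for the real DA layer client.
type dalcClient struct{}

func (dalcClient) SubmitBlockData(data []byte) error { return nil }

type Manager struct{ dalc dalcClient }

// maybeSubmitData skips the DA call for empty data instead of submitting a
// zero-size blob. Once ISRs wrap block.Data (#885), the serialized data
// should never be empty and this branch becomes unreachable.
func (m *Manager) maybeSubmitData(data []byte) error {
	if len(data) == 0 {
		log.Println("empty block data; skipping DA submission")
		return nil
	}
	return m.dalc.SubmitBlockData(data)
}
```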
@gupadhyaya This does not unblock the separation of block/header, as we would need to enable fraud proofs. But for pessimistic chains, we don't need the ISR overhead. We wouldn't need the header referencing the block, as there are no light nodes. You need full nodes to get full security for pessimistic mode.
Can we post an empty share instead? Like just a 0? Would this deserialize correctly?
> @gupadhyaya This does not unblock the separation of block/header, as we would need to enable fraud proofs. But for pessimistic chains, we don't need the ISR overhead. We wouldn't need the header referencing the block, as there are no light nodes. You need full nodes to get full security for pessimistic mode.
Agreed, we should make sure things are consistent with pessimistic mode as well.