
feat: WIP integrate batched blobs into l1 contracts + ts


Finalises integration of batched blobs

mw/blob-batching-integration adds batching to the rollup .nr circuits only (so it will not run in the repo on its own). This PR brings those changes downstream to the TypeScript and L1 contracts. Main changes:

  • L1 Contracts:
    • No longer calls the point evaluation precompile on propose; instead injects the blob commitments, checks that they correspond to the broadcast blobs, and stores them in the blobCommitmentsHash (a sketch of this flow follows this list)
    • Does not store any blob public inputs apart from the blobCommitmentsHash (no longer required)
    • Calls the point evaluation precompile once on submitEpochRootProof for ALL blobs in the epoch
    • Uses the same precompile inputs as public inputs to the root proof verification, along with the blobCommitmentsHash, to link the circuit's batched blob, the real L1 blobs, and the batched blob verified on L1
  • Refactors mock blob oracle
  • Injects the final blob challenges used on each blob into all block building methods in the orchestrator
  • Accumulates blobs in ts when building blocks and uses them as inputs to each rollup circuit (an accumulation sketch also follows this list)
  • Returns the blob inputs required for submitEpochRootProof from finaliseEpoch()
  • Updates the nr structs in ts, plus fixtures and tests
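
To make the new L1 flow concrete, here is a minimal TypeScript sketch (the real code lives in the Solidity rollup contract; the function names and the exact hashing details here are illustrative assumptions, not the contract's actual API):

```typescript
import { createHash } from 'crypto';

const sha256 = (data: Buffer): Buffer => createHash('sha256').update(data).digest();

// Running commitment hash kept by the rollup contract (illustrative).
let blobCommitmentsHash = Buffer.alloc(32);

// Per block, on propose(): check the injected commitments against the broadcast
// blobs (via their EIP-4844 versioned hashes) and fold them into the running hash
// instead of running the point evaluation precompile.
function onPropose(blobCommitments: Buffer[], blobVersionedHashes: Buffer[]) {
  blobCommitments.forEach((commitment, i) => {
    // EIP-4844 versioned hash: 0x01 || sha256(commitment)[1..32].
    const versionedHash = Buffer.concat([Buffer.from([0x01]), sha256(commitment).subarray(1)]);
    if (!versionedHash.equals(blobVersionedHashes[i])) {
      throw new Error('commitment does not match broadcast blob');
    }
    blobCommitmentsHash = sha256(Buffer.concat([blobCommitmentsHash, commitment]));
  });
}

// Once per epoch, on submitEpochRootProof(): build the single point evaluation
// precompile input for the batched blob (EIP-4844 layout, 192 bytes).
function pointEvaluationInput(versionedHash: Buffer, z: Buffer, y: Buffer, commitment: Buffer, proof: Buffer): Buffer {
  // versioned_hash (32) || z (32) || y (32) || commitment (48) || proof (48).
  return Buffer.concat([versionedHash, z, y, commitment, proof]);
}
```

On submitEpochRootProof the same z, y and commitment values are reused as public inputs to the root proof verification, alongside the final blobCommitmentsHash, which is what ties the circuit's batched blob to the blobs actually broadcast on L1.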
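
For the ts-side accumulation, a rough sketch of the kind of folding involved, assuming the batching is a random linear combination over a final challenge gamma (BlobEvaluation, accumulateBlobEvaluations and the exact combination are assumptions for illustration, not the orchestrator's real API):

```typescript
// BLS12-381 scalar field modulus.
const BLS_MODULUS = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001n;

interface BlobEvaluation {
  y: bigint; // blob polynomial evaluated at the shared challenge point z
}

// Fold per-blob evaluations into one batched value as sum_i gamma^i * y_i (mod r).
// The blob commitments would be combined with the same powers of the challenge
// over BLS12-381 G1.
function accumulateBlobEvaluations(blobs: BlobEvaluation[], gamma: bigint): bigint {
  let acc = 0n;
  let power = 1n;
  for (const { y } of blobs) {
    acc = (acc + power * y) % BLS_MODULUS;
    power = (power * gamma) % BLS_MODULUS;
  }
  return acc;
}
```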

TODOs/Current issues

  • ~When using real proofs (e.g. yarn-project/prover-client/src/test/bb_prover_full_rollup.test.ts), the root rollup proof is generated correctly but fails verification checks in bb due to an incorrect number of public inputs. Changing the number correctly updates vks and all constants elsewhere, but bb does not change.~ EDIT: solved - must include the is_inf point member for now (see the TODO below)
  • ~The Prover.toml for block-root is not executing. The error manifests in the same way as the one in https://github.com/AztecProtocol/aztec-packages/pull/12540 (though the underlying cause may differ).~ EDIT: temporarily fixed - details in this repro (#14381) and noir issue (https://github.com/noir-lang/noir/issues/8563).
  • BLS points in noir take up 9 fields (4 for each coordinate as a limbed bignum, plus 1 for the is_inf flag) but can be compressed to only 2 (sketched below). For recursive verification in block root and above, would it be worth the gates to compress these? It depends on whether the gate cost of compression is more or less than the gate cost of recursively verifying 7 more public inputs.
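
As a rough illustration of the "compressed to only 2" idea in the last item: a BLS12-381 x coordinate is 381 bits, so it cannot fit in a single ~254-bit circuit field, but it does fit in two, with room left over for the y-sign and is_inf flags. The concrete limb split below is an assumption for illustration, not the layout the circuits would use:

```typescript
// Hypothetical packing: low 248 bits of x in one field, the remaining high
// bits of x plus the two flag bits in a second field.
function compressG1(x: bigint, ySign: bigint, isInf: bigint): [bigint, bigint] {
  const LOW_BITS = 248n;
  const low = x & ((1n << LOW_BITS) - 1n);
  const high = (x >> LOW_BITS) | (ySign << 200n) | (isInf << 201n);
  return [low, high];
}
```

Whether this is worth doing comes down to whether the in-circuit decompression costs fewer gates than recursively verifying the 7 extra public inputs per point.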

PR Stack

  • [ ] mw/blob-batching <- main feature
  • [ ] ^ mw/blob-batching-bls-utils <- BLS12-381 bigcurve and bignum utils (noir) (#13583)
  • [ ] ^ mw/blob-batching-bls-utils-ts <- BLS12-381 bigcurve and bignum utils (ts) (#13606)
  • [ ] ^ mw/blob-batching-integration <- Integrate batching into noir protocol circuits (#13817)
  • [x] ^ mw/blob-batching-integration-ts-sol <- Integrate batching into ts and solidity (#14329)

MirandaWood · May 15, 2025