Upgrade to c-kzg 2.1.0 and alloy-primitives 1.0
Status: Sadly, upgrading c-kzg brings a performance degradation that makes the simulator CI job fail. Increasing GENESIS_DELAY helps, but I wonder why this is happening now.
Issue Addressed
Update c-kzg from v1 to v2. My motivation is that alloy-consensus now uses c-kzg v2, which results in a conflict when using lighthouse in combination with the latest alloy. I also tried disabling the c-kzg feature in alloy, but the conflict persisted.
See here for the alloy update to c-kzg v2: https://github.com/alloy-rs/alloy/pull/2240
Error:
error: failed to select a version for `c-kzg`.
...
versions that meet the requirements `^1` are: 1.0.3, 1.0.2, 1.0.0
the package `c-kzg` links to the native library `ckzg`, but it conflicts with a previous package which links to `ckzg` as well:
package `c-kzg v2.1.0`
... which satisfies dependency `c-kzg = "^2.1"` of package `alloy-consensus v0.13.0`
... which satisfies dependency `alloy-consensus = "^0.13.0"` of package ...
...
Proposed Changes
- Upgrade `alloy-consensus` to `0.14.0` and disable all default features
- Upgrade `c-kzg` to `v2.1.0`
- Upgrade `alloy-primitives` to `1.0.0`
- Adapt the code to the new `c-kzg` API:
  - There is now `NO_PRECOMPUTE`. My understanding from https://github.com/ethereum/c-kzg-4844/pull/545/files is that we should use `0` here, since `new_from_trusted_setup_no_precomp` does not precompute, but maybe that is misleading. In all other places I used `RECOMMENDED_PRECOMP_WIDTH`, because `8` matches the recommendation (see the sketch after this list).
  - `BYTES_PER_G1_POINT` and `BYTES_PER_G2_POINT` are no longer public in `c-kzg`.
- I adapted two tests that check the `Attestation` bitfield size, but I could not pinpoint what changed and why it is now 8 bytes less. I would be happy about any hint, and to know whether this is correct. I found a related PR here: https://github.com/sigp/lighthouse/pull/6915
- Use the same field names in the JSON as `c-kzg` and `rust_eth_kzg` for `g1_monomial`, `g1_lagrange`, and `g2_monomial`
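For illustration, here is a minimal sketch (not code from this PR) of how the precompute parameter gets passed when loading a trusted setup with c-kzg v2. The exact signature of `KzgSettings::load_trusted_setup` and the flat byte-slice layout are assumptions based on the 2.1.0 bindings:

```rust
use c_kzg::KzgSettings;

// Precompute widths referenced above (assumed values for illustration):
// 0 disables the FK20 precomputation, 8 is the recommended width.
const NO_PRECOMPUTE: u64 = 0;
const RECOMMENDED_PRECOMP_WIDTH: u64 = 8;

/// Hypothetical helper: load a trusted setup from the three point arrays
/// the v2 API expects (g1 monomial, g1 lagrange, g2 monomial).
fn load_settings(
    g1_monomial_bytes: &[u8],
    g1_lagrange_bytes: &[u8],
    g2_monomial_bytes: &[u8],
) -> Result<KzgSettings, c_kzg::Error> {
    KzgSettings::load_trusted_setup(
        g1_monomial_bytes,
        g1_lagrange_bytes,
        g2_monomial_bytes,
        RECOMMENDED_PRECOMP_WIDTH,
    )
}
```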
Additional Info
- The copying and flattening into the `KzgSettings` data structure seems suboptimal. The whole data structure could already be prepared for this, like in `c-kzg`.
- It seems that the `c-kzg` lib already provides those constants (e.g. with `c_kzg::ethereum_kzg_settings`), and maybe `TrustedSetup` could be simplified in the future (e.g. removing it and letting the user load a custom `KzgSettings` from JSON). A hedged sketch of this follows below.
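A rough sketch of that simplification, using the setup shipped with the crate instead of parsing a custom `TrustedSetup`. The signature of `ethereum_kzg_settings` (taking a precompute width) is an assumption based on the 2.x bindings:

```rust
use c_kzg::{ethereum_kzg_settings, KzgSettings};

// Hedged sketch: c-kzg v2 ships the Ethereum mainnet trusted setup, so the
// JSON loading path could be reserved for custom setups only.
fn default_settings() -> &'static KzgSettings {
    // Precompute width 8 matches the recommendation discussed above.
    ethereum_kzg_settings(8)
}
```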
PS: I did not test this with a real network; I hope the test coverage from other crates would highlight if KZG is broken by this change. I only compared by hand that e.g. g1_monomial is read correctly (see the sketch below). I am open to adding tests for the correct loading of the KZG setup, but maybe it makes sense to pursue a deeper integration of c-kzg in a separate step, because that requires many more changes. Or it will be removed with this issue: https://github.com/sigp/lighthouse/issues/6107#issuecomment-2632542447
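If a loading test is added later, it could be as simple as deserializing the bundled JSON and spot-checking the renamed fields. The struct and file path below are illustrative; Lighthouse's actual `TrustedSetup` type may differ:

```rust
use serde::Deserialize;

// Illustrative shape of the trusted-setup JSON with the renamed fields.
#[derive(Deserialize)]
struct TrustedSetupJson {
    g1_monomial: Vec<String>,
    g1_lagrange: Vec<String>,
    g2_monomial: Vec<String>,
}

#[test]
fn trusted_setup_fields_parse() {
    // Hypothetical path to the bundled setup file.
    let json = std::fs::read_to_string("trusted_setup.json").unwrap();
    let setup: TrustedSetupJson = serde_json::from_str(&json).unwrap();
    // Spot-check that the renamed fields were actually populated.
    assert!(!setup.g1_monomial.is_empty());
    assert!(!setup.g1_lagrange.is_empty());
    assert!(!setup.g2_monomial.is_empty());
}
```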
Thank you for taking the time to have a look. It’s not a priority. I came across this as a transitive dependency and initially thought updating the library would be a quick win. But given the issues that have come up, that doesn’t seem to be the case. I’m fine with either closing this PR or leaving it open until there’s a decision on whether c-kzg will be removed.
I'm going to try a narrower and more targeted cargo update, as the Cargo.lock file is currently very noisy and hard to review.
Hi @cakevm, this pull request has been closed automatically due to 30 days of inactivity. If you’d like to continue working on it, feel free to reopen at any time.
I did some research and may have found something that could explain the slower runtime in the tests with version 2. It looks like the initialization behavior has changed to always initialize FK20 and support cell operations, which is not required here, but leads to a longer init time. For more details see: https://github.com/ethereum/c-kzg-4844/issues/580 (a timing sketch follows).
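A quick way to confirm that theory would be timing the setup initialization; this is a hypothetical micro-check, not code from the PR:

```rust
use std::time::Instant;

fn main() {
    // Hedged: ethereum_kzg_settings likely caches its result in a static,
    // so only the first call reflects the real initialization cost.
    let start = Instant::now();
    let _settings = c_kzg::ethereum_kzg_settings(8);
    println!("init with precompute=8 took {:?}", start.elapsed());
}
```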
Hi @cakevm, this pull request has been closed automatically due to 30 days of inactivity. If you’d like to continue working on it, feel free to reopen at any time.
I would still like to merge this
This pull request has been removed from the queue for the following reason: pull request dequeued.
Pull request #7271 has been dequeued. The pull request could not be merged. This could be related to an activated branch protection or ruleset rule that prevents us from merging. (details: 1 review requesting changes and 1 approving review by reviewers with write access.).
You can check the last failing draft PR here: #7712.
You should look at the reason for the failure and decide if the pull request needs to be fixed or if you want to requeue it.
If you do update this pull request, it will automatically be requeued once the queue conditions match again.
If you think this was a flaky issue instead, you can requeue the pull request, without updating it, by posting a @mergifyio requeue comment.
@mergify requeue
requeue
✅ The queue state of this pull request has been cleaned. It can be re-embarked automatically
I don't see why it failed, retrying
This pull request has been removed from the queue for the following reason: checks failed.
The merge conditions cannot be satisfied due to failing checks:
You can check the last failing draft PR here: #7714.
You may have to fix your CI before adding the pull request to the queue again.
If you update this pull request, to fix the CI, it will automatically be requeued once the queue conditions match again.
If you think this was a flaky issue instead, you can requeue the pull request, without updating it, by posting a @mergifyio requeue comment.
This pull request has been removed from the queue for the following reason: pull request dequeued.
Pull request #7271 has been dequeued. The pull request could not be merged. This could be related to an activated branch protection or ruleset rule that prevents us from merging. (details: 1 review requesting changes and 1 approving review by reviewers with write access.).
You can check the last failing draft PR here: #7715.
You should look at the reason for the failure and decide if the pull request needs to be fixed or if you want to requeue it.
If you do update this pull request, it will automatically be requeued once the queue conditions match again.
If you think this was a flaky issue instead, you can requeue the pull request, without updating it, by posting a @mergifyio requeue comment.