rust-lightning
Use unique per-node "node_counter"s rather than a node hashmap in routing
During routing, we spend most of our time doing hashmap lookups. It turns out we can drop two of them. The first requires a good bit of work: by assigning each node in memory a random u32 "node counter", we can drop the main per-node routefinding state map and replace it with a Vec. Once we do that, we can also drop the first-hop hashmap lookup that we do on a per-node basis as we walk the network graph, replacing it with a check in the same Vec.
This is the first in a series of PRs that, in total, substantially more than double our routefinding performance with real data. This first step optimizes the route-finder itself; later steps focus more on the scorer.
~Based on #2802.~
CI's unhappy. Looks like there's some error in the code.
Fixed.
Rebased.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 89.78%. Comparing base (78c0eaa) to head (f689e01).
Additional details and impacted files
```
@@            Coverage Diff             @@
##             main    #2803      +/-   ##
==========================================
- Coverage   89.80%   89.78%   -0.02%
==========================================
  Files         121      121
  Lines      100045   100094      +49
==========================================
+ Hits        89845    89869      +24
- Misses       7533     7555      +22
- Partials     2667     2670       +3
```
Walkthrough
The project has undergone a significant update, focusing on efficiency and data integrity. The .github/workflows/build.yml file reflects updated paths and keys for network graph and scorer binaries, ensuring the latest versions are used. In the lightning source code, there's a refactoring for feature flag checks, structural optimizations for network graph storage, and scoring logic revisions to enhance performance. Additionally, new counters for node tracking in routing have been introduced, suggesting a move towards more detailed network analysis.
Changes
| File Path | Change Summary |
|---|---|
| `.github/workflows/build.yml` | Updated paths and keys for net graph and scorer binaries; new SHA sum checks added. |
| `.../src/ln/features.rs` | Refactored `requires_unknown_bits` method for efficient flag comparison. |
| `.../src/routing/gossip.rs` | Added node counters and restructured fields for cache optimization and consistency checks. |
| `.../src/routing/scoring.rs` | Altered `decay_100k_channel_bounds` function to use graph scorer and current time update. |
| `.../src/util/test_utils.rs` | Introduced node counters to routing structs for enhanced route tracking. |
🐇✨ To code we hop, with every commit,
A graph update, a refactor bit.
With scores and nodes, we weave the net,
Our binary tales, in silicon set. 🌐🔍
Rebased.
I did a first high-level pass and added an initial round of questions. I have to say that I'm close to a concept NACK on this one: the router logic is hard to reason about as it is, and we keep discovering bugs here. It seems to me that this PR significantly increases the code complexity and introduces several new angles from which things can go wrong. While this seems to work just fine for now, I fear that we'll see more breakage in the router code as a consequence in the future. If we really want to go ahead with this, it would be great if we could find a better abstraction for our newly created data structure that would offer a foolproof API, e.g., so we don't forget to insert/remove reused counters in the corresponding list.
Fair, let me encapsulate the node counter logic and remove it from get_route and then we can see how we feel about it.
I have yet to run the benchmarks myself to see how much speedup this PR would gain us, but from my first impression I'm not convinced it's worth the increased risks and maintenance costs. Also, it seems that a good chunk of the performance improvements might come from the last few commits alone, which are optimizations that could be applied independently from switching to node counters?
Sadly not. The last few commits reduce the pressure we put on the branch predictor, and improve things a bit on the edges, but the vast majority of the gain here is dropping the hash table lookups. A very large portion of our total routing time is spent just doing hash table lookups directly (we have like 3 or 4 of them we index into in routing - the network graph, gossip data, dist, etc), so dropping one entirely is a huge win.
Okay, rebased on main. With the new struct I think it's not that messy, and now it also lets us simplify some of the blinded-path stuff too, which I think is nice.
IMO, the node_counter changes could be split off to make the PR more focused. At the moment there's a lot bundled in here with the cache updates, more minor get_route optimizations, benchmarking updates and feature bit parsing.
Pulled smaller changes into #3103 and #3104.
Rebased.
CI is sad.
Fixed
It is all pretty trivial, but at least, e.g., the feature optimization and the first-hop cache could probably use another pair of eyes.
Gonna go ahead and land to get this done, but will tackle nits in a quick followup.