actor: add new package for structured concurrency based on the Actor model
In this PR, we add a new package that implements structured concurrency patterns using the actor model. The README should give a good overview of how the API works and the major abstractions at play. I'll focus this PR body on the motivation behind introducing such a model, and on some of my goals.
Motivation
While working on the new RBF-based channel close, I ran into a bug while testing the restart scenario.
Typically, the RPC server contacts the switch to do a coop close. The switch holds the link, which has a callback passed in to trigger a coop close via the peer. At the end of this series of calls, we create a new chan closer that uses the underlying channel, along with the new RBF state machine, to drive the coop close process: https://github.com/lightningnetwork/lnd/blob/3707b1fb7032d98dc2a8e881d3ae83e61fbc35e3/rpcserver.go#L2811-L2831
Unlike the existing chan closer, the RBF chan closer is free to trigger additional fee updates after a restart. However, since we don't load non-active channels back into the switch, the RPC server could no longer rely on the switch to send the messages it needed.
To get around this, I had to add 3 new methods to do the pointer chasing of: rpcServer -> server -> peer -> active close map -> rbf closer: https://github.com/lightningnetwork/lnd/blob/3707b1fb7032d98dc2a8e881d3ae83e61fbc35e3/server.go#L5521-L5548
https://github.com/lightningnetwork/lnd/blob/3707b1fb7032d98dc2a8e881d3ae83e61fbc35e3/server.go#L5482-L5519
https://github.com/lightningnetwork/lnd/blob/3707b1fb7032d98dc2a8e881d3ae83e61fbc35e3/peer/brontide.go#L5355-L5392
Across the codebase, this is a common pattern: we go from the rpcServer to the server, which is effectively a "god struct" holding pointers to all the other sub-systems we may need to access. In the version before this PR, as the life cycle of some sub-systems shifted, we had to unravel a few abstractions just to send a message to the sub-system at play. This requires quite a bit of knowledge on the part of the RPC server: it needs to know which sub-systems manage others, their life cycles, how they're created, which methods to call, and so on.
The Actor Solution
As we'll see in a follow-up PR, the new actor package lets the peer expose a new actor that can handle the RPC-specific logic for the close bump process. So instead of doing this pointer chasing through the god struct, which adds a degree of tight coupling, the rpcserver just needs the ServiceKey advertised at the package level. It can then use that key to send a message directly to the new RBF close actor.
This achieves a greater degree of loose coupling, and the abstractions lend themselves well to experimentation and to composing various actors.
Broadly across the codebase, we already implement message passing between event loops managed by goroutines, where the state is a local variable. This package codifies that pattern and creates a more standardized way for these event loops to interact with each other. It also provides more flexibility, as sub-system boundaries are based on messages instead of methods.
Future[T]
IMO the Future[T] type added in this PR is also a very nice abstraction: it wraps the typical send/recv timeout behavior we have elsewhere across the codebase. Instead of directly returning a channel (which allows anti-patterns such as blocking forever with no context), we can return a Future[T] in its place.
Used this recently during the PB Hackathon. It worked pretty well. Decomposing the pipeline into dedicated actors allowed for incremental implementation during the time crunch (a 24 hr hackathon, though we built our solution in ~6 hours of actual coding). Only one instance of each actor was present in the final solution, but we could easily scale out any of them (in particular the ChatActor and OptimizerActor) to increase parallelism. If we were to modify the FuzzExecutorActor to run on something like managed docker containers instead, the messages at either end stay the same, so no updates are needed elsewhere.
Here's a diagram describing the final architecture:
```mermaid
flowchart LR
    CLI(User/CLI) --> Engine(FuzzEngine)
    Engine -- "1\. Starts" --> Controller(FuzzControllerActor)
    subgraph Phase1["Phase 1: Initial Generation"]
        Controller -- "2\. Send Program" --> Writer(FuzzWriterActor)
        Writer -- "3\. Request" --> ChatActor(ChatActor)
        ChatActor -- "4\. Generate" --> Writer
        Writer -- "5\. Provide Test" --> Controller
    end
    subgraph Phase2["Phase 2: Iterative Loop"]
        Controller -- "6\. Execute" --> Executor(FuzzExecutorActor)
        Executor -- "7\. Return Results" --> Controller
        Controller -- "8\. Analyze" --> Optimizer(FuzzOptimizerActor)
        Optimizer -- "9\. Request" --> ChatActor
        ChatActor -- "10\. Improve" --> Optimizer
        Optimizer -- "11\. Return" --> Controller
        Controller -- "12\. Repeat" --> Executor
    end
    Controller -- "13\. Finalize" --> Report(Final Report)
```
One thing missing (especially important for the distributed case) is any sort of active queue management. Right now we'll just block forever, or until the context gets cancelled. We could update this to hand the message down to a RetryActor that'll keep trying to deliver the message until restart. Of course, we can also just rely on the caller to handle that themselves.
/gemini review
@gijswijs: review reminder @roasbeef, remember to re-request review from reviewers when ready