Client-side of static invoice server
As part of being an async recipient, we need to interactively build an offer and static invoice with an always-online node that will serve static invoices on our behalf in response to invoice requests from payers.
While users could build this invoice manually, the plan is for LDK to automatically build it for them using onion messages. See this doc for more details on the protocol. Here we implement the client side of the linked protocol.
See https://github.com/lightning/bolts/pull/1149 for more information on async payments.
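To make the interactive flow concrete, here is a minimal sketch of the client-side state machine the protocol implies: the async recipient waits for the server to supply paths for the offer, builds the offer and static invoice, and then waits for the server to confirm it persisted the invoice. All types and method names here are invented for illustration; they are not LDK's actual API.

```rust
// Hypothetical sketch of the client-side flow; NOT LDK's real types.
#[derive(Debug, Clone, Copy, PartialEq)]
enum ClientState {
    /// Waiting for the always-online server to reply with paths to
    /// include in the offer.
    AwaitingOfferPaths,
    /// Offer + static invoice built; waiting for the server to confirm
    /// it persisted the invoice.
    AwaitingInvoicePersisted { offer_id: u64 },
    /// Server confirmed persistence; the offer can be handed to payers.
    OfferReady { offer_id: u64 },
}

struct AsyncReceiveClient {
    state: ClientState,
}

impl AsyncReceiveClient {
    fn new() -> Self {
        Self { state: ClientState::AwaitingOfferPaths }
    }

    /// Server replied with offer paths: build the offer and static
    /// invoice, then send the invoice to the server for persistence.
    fn handle_offer_paths(&mut self, offer_id: u64) {
        if self.state == ClientState::AwaitingOfferPaths {
            self.state = ClientState::AwaitingInvoicePersisted { offer_id };
        }
    }

    /// Server acked persistence of our static invoice.
    fn handle_invoice_persisted(&mut self) {
        if let ClientState::AwaitingInvoicePersisted { offer_id } = self.state {
            self.state = ClientState::OfferReady { offer_id };
        }
    }
}
```

The real implementation drives these transitions via onion messages; the sketch only shows the ordering of the handshake.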
I'll go through the commits in a bit more detail before taking this out of draft, but conceptual feedback, feedback on the protocol itself, or feedback on how the code is organized overall is welcome. It currently adds a significant amount of code to ChannelManager.
Hmmm, I wonder if we shouldn't allow the client to cache N offers rather than only 1. I worry a bit about the privacy implications of having One Offer that gets reused across different contexts.
I think that makes sense, so they would interactively build and cache a few and then randomly(?) return one of them on get_cached_async_receive_offer?
It seems reasonable to save for follow-up although I could adapt the AsyncReceiveOffer cache struct serialization for this now.
> I think that makes sense, so they would interactively build and cache a few and then randomly(?) return one of them on get_cached_async_receive_offer?
Yea, I dunno what to do for the fetcher, maybe we just expose the whole list?
> It seems reasonable to save for follow-up although I could adapt the AsyncReceiveOffer cache struct serialization for this now.
Makes sense, tho I imagine it would be a rather trivial diff, no?
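For what the multi-offer idea could look like, here's a minimal sketch of a bounded cache that stores up to N offers and returns one of them. The `CachedOffer`/`AsyncReceiveOfferCache` shapes here are made up for illustration (not the PR's actual structs), and caller-supplied entropy stands in for a real RNG to keep the sketch dependency-free.

```rust
// Hypothetical multi-offer cache sketch; names/shapes are illustrative.
const MAX_CACHED_OFFERS: usize = 3;

#[derive(Debug, Clone, PartialEq)]
struct CachedOffer {
    id: u64,
}

#[derive(Default)]
struct AsyncReceiveOfferCache {
    offers: Vec<CachedOffer>,
}

impl AsyncReceiveOfferCache {
    /// Store a freshly built offer, evicting the oldest once full.
    fn cache_offer(&mut self, offer: CachedOffer) {
        if self.offers.len() == MAX_CACHED_OFFERS {
            self.offers.remove(0);
        }
        self.offers.push(offer);
    }

    /// Return one cached offer, chosen by caller-supplied entropy so the
    /// same offer isn't reused across unrelated contexts.
    fn get_cached_async_receive_offer(&self, entropy: usize) -> Option<&CachedOffer> {
        if self.offers.is_empty() {
            None
        } else {
            Some(&self.offers[entropy % self.offers.len()])
        }
    }
}
```

Exposing the whole list, as suggested above, would just mean returning `&self.offers` instead of picking one internally.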
Going to base this on https://github.com/lightningdevkit/rust-lightning/pull/3640. Will finish updating the ser macros based on those changes and push updates here after finishing some review.
Pushed some updates after moving the async receive offer cache into the new OffersMessageFlow struct added in #3639.
Rebased on the latest version of #3639
Codecov Report
Attention: Patch coverage is 27.74566% with 125 lines in your changes missing coverage. Please review.
Project coverage is 89.55%. Comparing base (24063e5) to head (0608de1). Report is 22 commits behind head on main.
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3618      +/-   ##
==========================================
- Coverage   89.65%   89.55%   -0.10%
==========================================
  Files         164      165       +1
  Lines      134658   134830     +172
  Branches   134658   134830     +172
==========================================
+ Hits       120723   120751      +28
- Misses      11256    11395     +139
- Partials     2679     2684       +5
Pushed some minor fixes for CI.
Would be helpful to link an async payments umbrella issue in the PR description, to get an overview of where things are at in the grander scheme.
Updated in response to @joostjager's comments. Link to diff
Rebased on main after #3639 landed
Rebased after #3767 landed, haven't addressed feedback yet
Mostly updated, need to fix linting. I added fixups for a few things to call attention/add a short explanation. Diff
Edit: fixed linting, diff
Addressed feedback with this diff
Updated in response to feedback. Github isn't giving me a nice diff link so made a pastebin: https://pastebin.com/JyLVcLWZ
Rebased due to conflicts
Pushed an overhaul of the PR based on discussions with @TheBlueMatt last week. Diff. Planning to discuss in more detail over a sync with @joostjager next week.
Planning to push a few more updates today improving some docs and things.
Edit: pushed a few more docs updates and fixes. Diff
These are the items I'm thinking of saving for follow-up PRs:
- support the server configuring how many invoices they'll persist for the user (currently it's hardcoded to 10)
- proactively invalidate offers/invoices when a channel closes (or when fees change?)
- exponential backoff on the messages the client is sending the server?
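For the exponential-backoff item, a hedged sketch of how the delay before the client's next message to the server could be derived from the retry count, with a cap so the delay never grows unbounded. The constants and function name are illustrative, not taken from the PR.

```rust
use std::time::Duration;

// Illustrative constants, not from the PR.
const BASE_DELAY_SECS: u64 = 60;
const MAX_DELAY_SECS: u64 = 60 * 60; // cap at one hour

/// Delay before the next client->server request: base * 2^attempts,
/// capped, with the exponent clamped to avoid shift overflow.
fn next_request_delay(attempts: u32) -> Duration {
    let exp = attempts.min(16);
    let secs = BASE_DELAY_SECS.saturating_mul(1u64 << exp);
    Duration::from_secs(secs.min(MAX_DELAY_SECS))
}
```

So attempt 0 waits one minute, attempt 1 two minutes, and anything past the cap waits an hour.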
Rebased on main to resolve minor conflicts in peer_handler.rs
Would be nice to move some logic from `flow.rs` to `async_receive_offer_cache.rs`. `flow.rs` is already pretty big and covers several largely-unrelated state machines. Keeping the async caching state machine more in its own file would be nice.
@TheBlueMatt I looked into this, it seems reasonable but I think it needs a prefactor to move the async offer/static invoice creation utils into the async_receive_offer_cache first.
I guess if we move in your suggested direction we'd want to start with that prefactor + 5b567a1a1fa1afaef4fa29343cf90e43b2b2fe50 in a separate PR, rather than landing this one, since so much of the current code would be imminently moving otherwise?
My reasoning for liking the current separation of responsibilities was basically that I like how simple/focused the cache is right now, and the invoice/onion message creation bits seemed similar to the flow's other responsibilities (plus the flow already has all the blinded path utils/message router/queues/expanded key available to it). The flow ends up at around ~1500 LoC in this PR.
I can definitely see the argument for putting the async offer/message creation into the cache though since it is a pretty different state machine from BOLT 12.
> I guess if we move in your suggested direction we'd want to start with that prefactor + https://github.com/lightningdevkit/rust-lightning/commit/5b567a1a1fa1afaef4fa29343cf90e43b2b2fe50 in a separate PR, rather than landing this one, since so much of the current code would be imminently moving otherwise?
Or we could land this one, since its where it is now, and then move the code later? Code moves are pretty cheap to review so its not like its adding a lot of review burden by doing this one first, no?
> My reasoning for liking the current separation of responsibilities was basically that I like how simple/focused the cache is right now, and the invoice/onion message creation bits seemed similar to the flow's other responsibilities (plus the flow already has all the blinded path utils/message router/queues/expanded key available to it). The flow ends up at around ~1500 LoC in this PR.
Fair, and it's nice to have a pretty encapsulated cache, but even 1500 LoC is pretty large, and importantly it's several fairly unrelated "flows" :/. The cache currently also kinda leaks a bit, e.g. the new_offers_requested() method allows for external direct state manipulation in the cache, which always feels kinda awkward.
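To illustrate the encapsulation point being raised: instead of an external setter like new_offers_requested() poking the cache's state from outside, the cache could both decide whether a request is due and record it in a single call, keeping the transition internal. This is a hypothetical sketch with made-up names and an illustrative interval, not the PR's actual API.

```rust
use std::time::{Duration, Instant};

// Illustrative rate limit, not from the PR.
const MIN_REQUEST_INTERVAL: Duration = Duration::from_secs(60);

struct AsyncReceiveOfferCache {
    last_request: Option<Instant>,
}

impl AsyncReceiveOfferCache {
    fn new() -> Self {
        Self { last_request: None }
    }

    /// Single entry point: returns true (and records the attempt) when a
    /// new offer-paths request should go out, so callers never mutate the
    /// cache's state directly.
    fn check_and_mark_offers_requested(&mut self, now: Instant) -> bool {
        let due = self
            .last_request
            .map_or(true, |t| now.duration_since(t) >= MIN_REQUEST_INTERVAL);
        if due {
            self.last_request = Some(now);
        }
        due
    }
}
```

The check and the state change happen in one method, so there's no window where external code observes or drives an intermediate state.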
Updated in response to @joostjager's feedback and some other small fixes I noticed. Diff