#2995 followups
Closes #3191.
We address two minor follow-ups for #2995.
Codecov Report
Attention: Patch coverage is 61.90476% with 16 lines in your changes missing coverage. Please review.
Project coverage is 91.31%. Comparing base (e4017c4) to head (53a616b). Report is 141 commits behind head on main.
| Files | Patch % | Lines |
|---|---|---|
| lightning/src/onion_message/messenger.rs | 60.00% | 16 Missing :warning: |
Additional details and impacted files
@@ Coverage Diff @@
## main #3193 +/- ##
==========================================
+ Coverage 89.74% 91.31% +1.57%
==========================================
Files 122 129 +7
Lines 101921 117718 +15797
==========================================
+ Hits 91467 107495 +16028
+ Misses 7768 7591 -177
+ Partials 2686 2632 -54
Excuse the delay here, finally addressed the outstanding comment. Let me know if I can squash.
Please squash, yea.
Squashed without further changes.
Force-pushed with added notifies, although had to rebase on main to make the notifier available.
Btw, I still think we might need to add a sleep (preferably with back-off?) in the BP as always immediately notifying (e.g. on persistence failure) might result in ~busy waiting if the failure reason persists, no?
It does, though it should usually be fine - writes shouldn't immediately fail unless we're, like, out of disk space, at which point we should really panic and crash, not keep trying. Mostly I assume the errors will come from remote persistence, which will naturally sleep while we try to connect to the host that isn't responding. Those that hit it because of things like running out of space will suffer, but I'm not sure how much we can do to save them - we could sleep 1ms to avoid the busy-wait, but their app is now malfunctioning and not working despite appearing kinda normal :/.
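For illustration, here is a minimal sketch of what the sleep-with-back-off idea discussed above could look like around a failing persistence call. Everything here is hypothetical: `persist_with_backoff` and the delay constants are not existing LDK API, and the background processor does not currently do this.

```rust
use std::time::Duration;

/// Hypothetical retry loop: sleep with exponential back-off on persistence
/// failure so a persistent error doesn't degenerate into a busy-wait.
fn persist_with_backoff<F: FnMut() -> std::io::Result<()>>(mut persist: F) {
    // Assumed tuning constants, chosen arbitrarily for the sketch.
    const BASE_DELAY: Duration = Duration::from_millis(10);
    const MAX_DELAY: Duration = Duration::from_secs(10);

    let mut delay = BASE_DELAY;
    loop {
        match persist() {
            Ok(()) => return,
            Err(e) => {
                eprintln!("persistence failed, retrying in {:?}: {}", delay, e);
                std::thread::sleep(delay);
                // Double the delay each attempt, capped at MAX_DELAY.
                delay = (delay * 2).min(MAX_DELAY);
            }
        }
    }
}

fn main() {
    // Usage example: a persist closure that fails twice, then succeeds.
    let mut attempts = 0;
    persist_with_backoff(|| {
        attempts += 1;
        if attempts < 3 {
            Err(std::io::Error::new(std::io::ErrorKind::Other, "host unreachable"))
        } else {
            Ok(())
        }
    });
}
```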
Yeah, I agree that for starters the current approach should be fine, although I think there are several improvements we could consider as follow-ups / when we discover users really require them.
Another one is that we now kind of require event handling to be idempotent, which might not always be trivial to ensure without keeping some state tracking to what extent you already handled an event previously. I could imagine that somewhere down the line users might benefit from the introduction of a unique event id, for example (rough sketch below).
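A rough sketch of the kind of bookkeeping this would enable, assuming a hypothetical unique `event_id` field - LDK's `Event` enum does not currently carry one, and `MyEvent`/`IdempotentHandler` are made-up names:

```rust
use std::collections::HashSet;

/// Hypothetical event carrying a unique identifier.
struct MyEvent {
    event_id: u64,
    // ... payload fields
}

/// Tracks which events were already fully handled, so events replayed
/// after a persistence failure and restart are skipped.
struct IdempotentHandler {
    handled: HashSet<u64>,
}

impl IdempotentHandler {
    fn new() -> Self {
        Self { handled: HashSet::new() }
    }

    fn handle(&mut self, event: &MyEvent) {
        // `insert` returns false if the id was already present, i.e.,
        // we processed this event in an earlier pass.
        if !self.handled.insert(event.event_id) {
            return;
        }
        // ... actual (side-effecting) event handling goes here.
        // Note: `handled` would need to be persisted atomically with the
        // side effects for this to be truly idempotent across restarts.
    }
}
```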
In any case, going ahead and landing this for now.