
[DRAFT]: Connection Reliability Improvements

fielding opened this pull request 8 months ago

Overview

This PR focuses on improving connection reliability in nats.py, addressing persistent stability issues observed in long-running applications and in scenarios with intermittent network disruptions, like the ones we've been experiencing from one of our compute providers (lol). I was able to mitigate some of it, though not all, with additional logic in our codebase that would nudge nats to reconnect in these instances and rebind the subscriptions, but honestly that was an ugly band-aid and it didn't work 100% of the time. So, after repeated encounters with elusive connectivity bugs, and noting similar experiences among others in the community (#598), I decided to take a stab at improving the issue at its core.

Here's what this PR includes:

1. Improved Ping Loop Error Handling

  • Enhanced error handling within the ping loop to prevent silent failures.
  • Properly catches and handles asyncio.InvalidStateError.
  • Adds a catch-all exception handler to ensure the ping loop never silently stalls.
  • Forces a proper disconnect with ErrStaleConnection when ping anomalies occur (see the sketch after this list).
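To illustrate the shape of this, here's a minimal sketch; the `client` interface and names like `force_disconnect` and `send_ping` are illustrative stand-ins, not the actual nats.py internals:

```python
import asyncio

class StaleConnectionError(Exception):
    """Stand-in for nats.py's ErrStaleConnection in this sketch."""

async def ping_interval_loop(client, interval: float = 120.0):
    while True:
        await asyncio.sleep(interval)
        try:
            if client.is_closed:
                break
            client.pings_outstanding += 1
            if client.pings_outstanding > client.max_outstanding_pings:
                # Too many unanswered PINGs: treat the connection as stale
                # and force a disconnect so the reconnect logic takes over.
                await client.force_disconnect(StaleConnectionError())
                break
            await client.send_ping()
        except asyncio.CancelledError:
            raise  # normal shutdown path
        except asyncio.InvalidStateError:
            # e.g. a pong future resolved twice; previously this could
            # kill the loop without a trace.
            await client.force_disconnect(StaleConnectionError())
            break
        except Exception:
            # Catch-all: the ping loop must never stall silently.
            await client.force_disconnect(StaleConnectionError())
            break
```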

2. Enhanced Read Loop Stability

  • Implements timeout detection for read operations, introducing a consecutive timeout counter to identify potentially stalled connections.
  • Adds a configurable client option (max_read_timeouts, defaults to 3) to fine-tune sensitivity.
  • Explicit handling for ConnectionResetError and asyncio.InvalidStateError to improve resilience and provide clearer debug information (sketched below).
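And a sketch of the consecutive-timeout idea, using the same illustrative `client` interface (this is the shape of the change, not the PR's actual diff):

```python
import asyncio

class StaleConnectionError(Exception): ...  # as in the ping-loop sketch

BUFFER_SIZE = 32768  # arbitrary for this sketch

async def read_loop(client, read_timeout: float = 5.0, max_read_timeouts: int = 3):
    consecutive_timeouts = 0
    while not client.is_closed:
        try:
            data = await asyncio.wait_for(
                client.reader.read(BUFFER_SIZE), read_timeout
            )
            consecutive_timeouts = 0  # any successful read resets the counter
            if not data:
                raise ConnectionResetError("server closed the connection")
            client.parse(data)
        except asyncio.TimeoutError:
            consecutive_timeouts += 1
            if consecutive_timeouts >= max_read_timeouts:
                # max_read_timeouts timeouts in a row: assume the socket
                # has stalled and hand off to the reconnect machinery.
                await client.force_disconnect(StaleConnectionError())
                break
        except (ConnectionResetError, asyncio.InvalidStateError) as exc:
            # Explicit handling gives clearer diagnostics than a raw crash.
            await client.force_disconnect(exc)
            break
```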

3. Reliable Request/Response Handling

  • Adds pre-flight connection health checks before issuing requests.
  • Improves internal cleanup for request/response calls to prevent subtle resource leaks.
  • Strengthens timeout and cancellation logic to guard against orphaned or stale futures (see the sketch below).
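Sketched, the request path looks roughly like this; `check_connection_health` mirrors the method described in the next section, while `new_inbox_token` and `publish_with_reply` are hypothetical helpers for brevity:

```python
import asyncio

async def request(client, subject: str, payload: bytes, timeout: float = 0.5):
    # Pre-flight health check: fail fast (or trigger a reconnect) before
    # parking a response future on a connection that is already gone.
    await client.check_connection_health()

    token = client.new_inbox_token()
    future = asyncio.get_running_loop().create_future()
    client.responses[token] = future
    try:
        await client.publish_with_reply(subject, token, payload)
        return await asyncio.wait_for(future, timeout)
    finally:
        # Runs on success, timeout, and cancellation alike, so no orphaned
        # or stale future is ever left behind in the response map.
        client.responses.pop(token, None)
        if not future.done():
            future.cancel()
```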

4. Proactive Connection Health Checks

  • Introduces _check_connection_health(), a method designed to proactively test and re-establish the connection if necessary.
  • Utilized in critical paths like request handling to ensure robustness under varying network conditions (sketched below).
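A sketch of what such a check might look like, assuming a bounded flush() round-trip as the liveness probe (illustrative; the real _check_connection_health() may differ in detail):

```python
import asyncio

class StaleConnectionError(Exception): ...  # as in the earlier sketches

async def check_connection_health(client, flush_timeout: float = 2.0):
    if client.is_closed:
        raise RuntimeError("client is closed")
    try:
        # A bounded flush round-trips a PING/PONG with the server, so
        # success means the socket is genuinely alive, not merely open.
        await asyncio.wait_for(client.flush(), flush_timeout)
    except (asyncio.TimeoutError, ConnectionResetError):
        # The socket looks open but is unresponsive: force a disconnect
        # so the normal reconnect path re-establishes the connection and
        # rebinds subscriptions.
        await client.force_disconnect(StaleConnectionError())
```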

Linked Issues

This PR addresses stability concerns raised in:

  • #598

Recommended Testing Scenarios

These improvements should noticeably enhance stability, particularly in environments with the following (a chaos-proxy sketch for reproducing flaky links locally follows the list):

  • Long-running applications (24+ hours uptime)
  • Frequent or intermittent connectivity disruptions
  • Intensive request-response workloads
  • Heavy usage of JetStream or Key-Value operations
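If you want to reproduce flaky-network conditions locally, here's a small chaos proxy I'd suggest as a starting point (my own sketch, not part of the PR): point the client at port 4223 and it forwards to a NATS server on 4222, randomly severing the link.

```python
import asyncio
import random

LISTEN_PORT, TARGET_PORT = 4223, 4222  # proxy port -> local NATS server

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    try:
        # Forward bytes until EOF or a connection error.
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
    except OSError:
        pass
    finally:
        writer.close()

async def handle(client_reader, client_writer):
    server_reader, server_writer = await asyncio.open_connection(
        "127.0.0.1", TARGET_PORT
    )
    tasks = [
        asyncio.create_task(pipe(client_reader, server_writer)),
        asyncio.create_task(pipe(server_reader, client_writer)),
    ]
    # Sever the link after a random interval to simulate a flaky network;
    # a healthy client should notice the stall and reconnect via the proxy.
    await asyncio.sleep(random.uniform(5.0, 30.0))
    for t in tasks:
        t.cancel()
    client_writer.close()
    server_writer.close()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", LISTEN_PORT)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```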

Impact and Compatibility

  • Backward Compatibility: Fully backward compatible. Existing interfaces remain unchanged.
  • Configuration: New options (max_read_timeouts) default to safe values and can be tuned as needed without affecting existing usage (a usage example follows this list).
  • Robustness: Designed to gracefully handle and recover from various edge cases previously causing silent connection failures.
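For example, assuming the option is exposed as a keyword argument to nats.connect() (it only exists with this branch applied; everything else is standard nats.py usage):

```python
import asyncio
import nats

async def main():
    nc = await nats.connect(
        servers=["nats://127.0.0.1:4222"],
        # New in this PR: raising it trades slower stall detection for
        # fewer false positives on slow links (default is 3).
        max_read_timeouts=5,
    )
    await nc.publish("demo", b"hello")
    await nc.drain()

asyncio.run(main())
```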

Contributing Statement

  • This contribution is my original work.
  • I license the work to the NATS project under the project's Apache 2.0 license.

As always, feedback is welcome, and I'm happy to iterate as needed!

Cheers,
Fielding

fielding • Mar 13 '25 18:03

I'm going to leave this as a draft until I can get some concrete data from my own testing in the wild or from others who might be having similar issues and can help to test.

If you want to install and test this commit, https://github.com/fielding/nats.py/commit/4a20463b521962c83ec58e0cc1a4d1f72fd98440, you can simply do:

pip install git+https://github.com/fielding/nats.py.git@4a20463b521962c83ec58e0cc1a4d1f72fd98440

or there is a way to install a specific PR by number, but I can't remember exactly what it is -.-
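For reference, GitHub exposes each PR's head as a git ref on the upstream repo, so something like the following should work once you fill in the PR number:

pip install git+https://github.com/nats-io/nats.py.git@refs/pull/<PR_NUMBER>/head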

fielding • Mar 13 '25 18:03

My own concerns with this:

  • Performance Overhead: Timeouts and health checks add slight overhead. For high-throughput applications, this could be noticeable. Consider profiling under load. This is my main concern.

  • Testing Gaps: The bug may or may not be limited to the specific scenarios I mentioned in the description, which makes it difficult to test and assess easily.

fielding • Mar 13 '25 20:03

So far, testing over the last 5 days shows a positive change in behavior. Prior to these changes, it would normally fail to reconnect somewhere around the 36-48 hour mark.

fielding • Mar 18 '25 18:03

Just some feedback on this.

We've run with the changes committed in this pull request for a month now, as we were experiencing the same kind of reliability issues, where the connection would randomly fail to be re-established without any notice of it happening.

Since we introduced these changes we've had no reliability issues, and that is with a NATS upgrade along with multiple restarts of our NATS servers.

MattiasAng • May 22 '25 10:05

@MattiasAng Thanks so much for the feedback. This is great news! To be completely honest, we've been using these changes in the wild as well and it has fully cleared up any issues we were having. To the point that I kind of forgot about this PR lol.

Your feedback + my experience with these changes gives me the reassurance I need to get this out of draft mode and in front of the team. Thank you!

fielding • May 24 '25 14:05

Would be great to have these, as I'm also facing some connection reliability issues.

Toruitas • Jun 25 '25 14:06

This would be good to get upstreamed. Facing a lot of these issues! @fielding, have you rebased this recently?

swelborn • Sep 30 '25 14:09

> 2. Enhanced Read Loop Stability
>
>   • Implements timeout detection for read operations, introducing a consecutive timeout counter to identify potentially stalled connections.

Same. We sometimes see the client go into an infinite loop around uvloop on connection timeouts.

pisymbol • Sep 30 '25 20:09

We're facing these issues in production as well; this PR would be very much welcomed here too! Did you get any traction from the maintainers?

cc @wallyqs @caspervonb

superlevure • Oct 10 '25 12:10

Hey guys, sorry, just now catching up on this. I'm no longer working on the project that I originally was using this on, but as far as I know they continued to use this branch in production for a while. I see there is definitely some interest in getting this to a point where we can try and get it merged in, so I will help as much as possible. It looks like @superlevure is already on it some =)

It looks like a rebase is in order, if nothing else =)

fielding • Oct 11 '25 15:10

This now includes the fix for the race condition that @superlevure caught, thanks!

fielding • Oct 11 '25 16:10

Looks promising, will do some testing locally with some chaos sprinkled in.

caspervonb • Oct 14 '25 16:10