Sentry background worker is chronically blocking async event loop when many exceptions are raised
How do you use Sentry?
Self-hosted/on-premise
Version
1.40.6
Steps to Reproduce
Hello! And thanks for reading my ticket :)
The python sentry client is a synchronous client library that is retrofitted to fit the async model (by spinning off separate threads to avoid disrupting the event loop thread -- see background worker (1) for thread usage).
Under healthy conditions, the sentry client doesn't need to make many web requests. However, if conditions become rocky and exceptions are frequently raised (caught or uncaught), the sentry client can become an extreme inhibitor of the app event loop (assuming a high sample rate). This is due to the necessary OS thread context switching that effectively pauses/blocks the event loop to work on other threads (i.e. the background worker (1)). This is not a recommended pattern (obviously) due to the cost of switching threads, but it can be useful for quickly/lazily retrofitting sync code.
Relevant flow, in short: every time an exception is raised (caught or uncaught) in my code, a web request is immediately made to dump the data to sentry when sampled. Since sentry's background worker is thread based (1), this triggers a thread context switch and then a synchronous web request to dump the data to sentry. When applications receive many exceptions in a short period of time, this becomes a context-switching nightmare.
Suggestion: In an ideal world, sentry would asyncify its Background worker to use a task (1) and its transport layer (2) would use aiohttp. I don't think this is of super high complexity, but I could be wrong.
An immediate workaround could be made with more background worker control. If sentry's background worker made web requests to dump data at configurable intervals, it would behave far more efficiently for event loop apps. At the moment, the background worker always dumps data immediately when an exception is sampled. In my opinion, if sentry flushes data at app exit, having a 60-second timer to dump data would alleviate most of the symptoms I described above without ever losing data (albeit delivery would be up to 60 seconds slower). A rough sketch of this idea follows below the footnotes.
(1) - https://github.com/getsentry/sentry-python/blob/1b0e932c3f827c681cdd20abfee9afc55e5d141c/sentry_sdk/worker.py#L20
(2) - https://github.com/getsentry/sentry-python/blob/1b0e932c3f827c681cdd20abfee9afc55e5d141c/sentry_sdk/transport.py#L244
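To make the suggestion more concrete, here is a rough sketch of the kind of worker I have in mind. Everything below is hypothetical (the class, method names, and endpoint handling are made up and not taken from the SDK): an asyncio task drains an `asyncio.Queue` and flushes batches with aiohttp on a configurable interval, so nothing ever leaves the event loop.

```python
import asyncio
from typing import Optional

import aiohttp


class AsyncBackgroundWorker:
    """Hypothetical asyncio-based worker: envelopes are queued and
    flushed on a configurable interval instead of per exception."""

    def __init__(self, endpoint: str, flush_interval: float = 60.0):
        self._endpoint = endpoint
        self._flush_interval = flush_interval
        self._queue: asyncio.Queue = asyncio.Queue()
        self._task: Optional[asyncio.Task] = None

    def start(self) -> None:
        # Must be called from within a running event loop;
        # no extra OS thread is ever created.
        self._task = asyncio.create_task(self._run())

    def submit(self, envelope: bytes) -> None:
        # Called whenever an event is captured; just enqueue, never block.
        self._queue.put_nowait(envelope)

    async def _run(self) -> None:
        async with aiohttp.ClientSession() as session:
            while True:
                await asyncio.sleep(self._flush_interval)
                await self._flush(session)

    async def _flush(self, session: aiohttp.ClientSession) -> None:
        while not self._queue.empty():
            envelope = self._queue.get_nowait()
            # Awaiting the request yields control back to the event loop.
            async with session.post(self._endpoint, data=envelope) as resp:
                await resp.read()

    async def close(self) -> None:
        # A final flush at shutdown would go here before cancelling the task.
        if self._task is not None:
            self._task.cancel()
```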
Expected Result
I expect to have less thread context switching when using sentry.
Actual Result
I see a lot of thread context switching when there are high exception rates.
Hey @cpvandehey! Thanks for the great ticket!
Hey @cpvandehey, thanks for writing in. Definitely agree with you that our async support could use some improvements (see e.g. https://github.com/getsentry/sentry-python/issues/1735, https://github.com/getsentry/sentry-python/issues/2007, https://github.com/getsentry/sentry-python/issues/2184, and multiple other issues).
Using an aiohttp client and an asyncio task both sound doable and would go a long way in making the SDK more async friendly.
We could detect whether aiohttp is in the project and, based on this, enable the new async support automatically. (I have not thought long about whether this could lead to problems, though.)
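The detection itself could be as simple as checking whether the package is importable; this is just a sketch to illustrate the idea, not existing SDK behaviour:

```python
import importlib.util


def aiohttp_available() -> bool:
    # Hypothetical check: only enable the async transport when aiohttp
    # is installed in the user's environment.
    return importlib.util.find_spec("aiohttp") is not None
```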
Hey @sentrivana / @antonpirker, any update on the progress for this? Happy to help
Hey @cpvandehey, no news on this, but PRs are always welcome if you feel like giving this a shot.
I see the milestone for this task was removed. @antonpirker, should we still consider writing our own attempt?
Hey @cpvandehey, sorry for the confusion regarding the milestone. Previously we were (mis)using milestones to group issues together, but have now decided to abandon that system. Nothing has changed priority wise.
Alright, I think I'm going to start to implement this. Stay tuned.
Coming up for air after a few hours invested/tinkering. I realized a few things that I should discuss before proceeding:
- `BackgroundWorker` is the most foundational class needing an async equivalent. Making this async friendly was fairly easy and I have this code ready -- naming it `BackgroundWorkerTask` for the time being. This will rely on the built-in `asyncio.Queue` instead of the `sentry_sdk._queue`. I also threw away all the unnecessary locking logic since this is single threaded for async use cases.
- Next level up the stack is the `HttpTransport` class. This layer references the `BackgroundWorker` object & stores it as an attribute called `_worker`. This, for the most part, is fairly straightforward to add an async equivalent for. There are only 4 methods that access the `_worker`, so it makes it easy to create a `BaseHttpTransport` that has all common functionality and have two child classes, i.e. `HttpTransport` and `HttpTransportAsync`, that inherit from it. Each of the children will have specific methods that interact with the aforementioned worker in async/sync fashion. (A rough sketch of this split is after this list.)
- The next level up is where the complexity rises. The `Client` class makes a call to `make_transport` and then holds a reference to it as `self.transport`. `self.transport` is used to make all the underlying requests. Although this code is dense, it's fairly understandable & could be split into a parent class (`BaseClient`) for common functionality and 2 child classes (`Client` and `AsyncClient`) for some unique transport actions that would need to be awaited. Separately, but as important, is a class named `Monitor` that uses threads to check in on a running `BackgroundWorker`. There would be a need for a `MonitorAsync` as well to check the health of `BackgroundWorkerTask`.
- Lastly, `Scope` and `Hub` both seem to be the top level for configuring `Client`. `Hub` has a direct reference to the `flush` method that will use the client's `flush` method (needing to be an async method). `Scope` does not have any direct way to `flush`, but only adds to the queue. I certainly could use some pointers on how we would configure 2 separate clients here.
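To make the `HttpTransport` split concrete, here is the rough shape I am imagining. Apart from `HttpTransport` itself, every name here is hypothetical, and the worker classes and send logic are stand-ins rather than the SDK's real implementations:

```python
from abc import ABC, abstractmethod


class BackgroundWorker:
    """Stand-in for the existing thread-based worker."""

    def submit(self, callback):
        callback()


class BackgroundWorkerTask:
    """Stand-in for the proposed asyncio-based worker."""

    async def submit(self, coro):
        await coro


class BaseHttpTransport(ABC):
    """Common functionality (options, envelope building, rate limits)."""

    def __init__(self, options):
        self.options = options
        self._worker = self._make_worker()

    @abstractmethod
    def _make_worker(self):
        """Create the worker appropriate for this transport flavour."""


class HttpTransport(BaseHttpTransport):
    """Sync flavour: keeps the current thread-based behaviour."""

    def _make_worker(self):
        return BackgroundWorker()

    def capture_envelope(self, envelope):
        # Placeholder send; the real transport would POST the envelope.
        self._worker.submit(lambda: print("sent", envelope))


class HttpTransportAsync(BaseHttpTransport):
    """Async flavour: awaits the worker instead of hopping threads."""

    def _make_worker(self):
        return BackgroundWorkerTask()

    async def capture_envelope(self, envelope):
        async def _send():
            print("sent", envelope)

        await self._worker.submit(_send())
```

The idea is that everything shared stays in the base class, so the async flavour only overrides the handful of worker-facing methods mentioned above.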
Exhales
Like most async integrations, they seem easy at the surface, but end up touching a lot of the code. I'm wondering if I am on the right track with what the python sentry folks want for this design. I would love for this to be collaborative and iterative. Let me know your thoughts on the approach above :)
Hey @cpvandehey !
Thanks for this great issue and your motivation! You are right, our async game is currently not the best, and we should, and will, improve on it.
To your deep dive:
- Imo everything you write about `BackgroundWorker` and `HttpTransport` makes sense.
- The `Client` and the `Scope` will probably be a bit trickier, as you also noticed.
- The `Hub` is deprecated and we will remove it in the next major, so we will not touch it ever again :-)
Currently we are in the middle of doing a big refactor, where we try to use OpenTelemetry (OTel) under the hood for our performance instrumentation.
We should not do the OTel and the async refactoring at the same time, as this would lead to lots of complexity and headaches.
So my proposal is that we first finish the OTel refactor and then tackle the async refactor. The OTel refactor will probably still take a couple of months (like 2-3, not 10-12). Do you think you can wait a while until we get started with this?
As this is a huge task, we should then create a milestone and split the task up into smaller chunks that can be tackled by multiple people at the same time.
Do you think you can wait a while until we get started with this?
yes
As this is a huge task, we should then create a milestone and split the task up into smaller chunks that can be tackled by multiple people at the same time.
sounds good!
Hey Sentry folks.
Currently we are in the middle of doing a big refactor, where we try to use OpenTelemetry (OTel) under the hood for our performance instrumentation.
Just bumping this ticket again. I assume the repo is in a better state to start this effort?
Hi @cpvandehey, thanks for the bump! I do agree the repo is in better shape. Moreover, I've started working on an experimental HTTP/2 transport using httpcore, and it looks like it has native async support for that part.
Since I'm lending some of my time to the Python SDK nowadays and working in a similar area, I think we can work together on the async support too.
I don't think I'm as well-versed as you are when it comes to the async game in Python, so I can use some of the code you say you've already written, such as the async background worker (I wonder if we actually need it with async, as all we need is a queue and the event loop should handle the worker logic, right?).
Anyway, this is hopefully coming and your involvement is much appreciated!
Hey @BYK, I see your code was merged to add in httpcore (1).
Does that mean this would be a good time to start the work on the above? Would love to help in any way that I can
(1) - https://github.com/getsentry/sentry-python/pull/3588
@cpvandehey yeah! Sorry this side stalled a bit, juggling too many things at once 😅
Now that we have httpcore with HTTP/2 support, I think we can try to re-imagine the worker thread-based transport as a completely asyncio-based one. I'd say it should be a new, experimental transport.
Would love to help in any way that I can
If you have any time, trying to take a jab at this new transport would be nice. Second best would be to try this new transport once it lands and see if it actually solves your issues and is stable.
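If it helps, httpcore's native async API should cover the send path. Here is a minimal sketch of what a fully asyncio-based send could look like; the endpoint, headers, and envelope handling are simplified placeholders, not the SDK's real transport code:

```python
import asyncio

import httpcore


async def send_envelope(endpoint: str, body: bytes) -> int:
    # The request runs on the event loop via httpcore's async connection
    # pool, so no worker thread is involved. http2=True needs the
    # httpcore[http2] extra installed.
    async with httpcore.AsyncConnectionPool(http2=True) as pool:
        response = await pool.request(
            "POST",
            endpoint,
            headers={"content-type": "application/x-sentry-envelope"},
            content=body,
        )
        return response.status


# Example usage with a placeholder endpoint:
# asyncio.run(send_envelope("https://example.ingest.sentry.io/api/0/envelope/", b"{}"))
```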
@cpvandehey
Could you give us an update on your plan? Have you started yet? Do you need support? Thanks for your efforts taken so far! :)
Hi @oktavlachs -- apologies for the radio silence. I'm planning to dive into this issue in the coming 1-2 weeks. I'm guessing the changes will require a major version bump which is also cooking.
No work has started yet so if you wanna jump in or provide any directions on where we should go, I'd definitely take those into account.
Just a quick update on this: SDK 3.x (coming soonish) will have an async transport. So this will improve sending data to Sentry in your async Python project considerably.