Multiprocessing integration
We do support fork() in general, but the multiprocessing module has some quirks:

- `atexit` does not get called, so events get lost
- Unhandled exceptions are not captured
An integration should fix both. We should also consider Billiard, a fork of multiprocessing, here. If we manage to properly hook into both versions, we can catch some startup errors in Celery as well.
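To make the first quirk concrete, here is a minimal sketch of a manual workaround using the public `sentry_sdk.flush()` API (the DSN is a placeholder): without the explicit flush, an event captured in the child process can be dropped because the child's atexit handlers never run.

```python
import multiprocessing

import sentry_sdk

def worker():
    # Each forked/spawned process needs its own transport; re-initializing
    # is the simplest way to ensure one exists (DSN is a placeholder).
    sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")
    sentry_sdk.capture_message("hello from child process")
    # Without this explicit flush, the event can be lost: the atexit
    # handlers that normally drain the queue may not run in the child.
    sentry_sdk.flush(timeout=2.0)

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()
```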
If you're interested in this, please give this post a 👍 so it's easier to track.
@rouge8 @crepererum can you clarify whether you want this because of celery or something else? After creating this issue I discovered that Celery issues need to be tackled separately.
Dask is able to use local processes as a local backend, and it would be really helpful to at least get the queues flushed properly (atexit). But maybe a direct dask (and even distributed) integration would be more helpful.
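In the meantime, a hedged sketch of flushing Sentry's queue from Dask workers via distributed's `WorkerPlugin` hook (the plugin class name and DSN are illustrative, not an official integration):

```python
import sentry_sdk
from distributed import Client, WorkerPlugin

class SentryFlushPlugin(WorkerPlugin):
    """Illustrative plugin: init Sentry per worker, flush on teardown."""

    def setup(self, worker):
        # Placeholder DSN; each worker process gets its own client.
        sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")

    def teardown(self, worker):
        # Flush queued events before the worker process exits, since
        # atexit handlers may not run reliably in worker processes.
        sentry_sdk.flush(timeout=2.0)

if __name__ == "__main__":
    client = Client(processes=True)
    client.register_worker_plugin(SentryFlushPlugin())
```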
@crepererum could you open a new issue wrt Dask support? The more I think about this (multiprocessing) support, the more I think it's an A/B problem.
> can you clarify whether you want this because of celery or something else?
I was reviewing the open issues while migrating away from the old raven library. :) We have some code that uses multiprocessing, but I haven't compared behavior with raven and sentry_sdk or anything yet.
A change in Apache Airflow (to use os.fork(), and thus the recommended os._exit()) has broken that integration.
It would be nice to have a way to explicitly call the Sentry exit handler. https://issues.apache.org/jira/browse/AIRFLOW-6569 /cc @tiopi
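As a stopgap until there is an explicit exit handler, flushing manually before os._exit() should work; a hedged sketch (`do_work` and `run_forked_task` are hypothetical names standing in for the real task body):

```python
import os

import sentry_sdk

def do_work():
    # Hypothetical task body standing in for the real work.
    sentry_sdk.capture_message("work done in forked child")

def run_forked_task():
    do_work()
    # os._exit() skips atexit handlers, so flush Sentry's queued events
    # explicitly before the child exits.
    sentry_sdk.flush(timeout=2.0)
    os._exit(0)
```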
Are there any workarounds for the lack of unhandled exception capture?
All I can think of is a decorator, i.e.:
```python
from functools import wraps

import sentry_sdk

def capture_exceptions(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            # Report to Sentry, then re-raise so the caller still sees it
            sentry_sdk.capture_exception(e)
            raise
    return wrapper

@capture_exceptions
def my_worker_function():
    sentry_sdk.init(dsn=f"https://{sentry_key}@{sentry_host}.ingest.sentry.io/{sentry_project_number}")
    a = 1 / 0
```
Downside is that all exceptions are reported as handled.
Hey @davegravy
This looks solid. For the problem with `handled` you could do something like we do in our integrations and have a custom `_capture_exception` method that sets the handled flag to false. See here: https://github.com/getsentry/sentry-python/blob/sentry-sdk-2.0/sentry_sdk/integrations/asyncio.py#L79-L95
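Concretely, a hedged sketch of that approach, modeled on the linked asyncio code (the `"multiprocessing"` mechanism type is illustrative, and `sentry_sdk.get_client()` assumes SDK 2.x):

```python
import sys

import sentry_sdk
from sentry_sdk.utils import event_from_exception

def _capture_exception():
    # Build an event from the current exception and mark it as unhandled,
    # mirroring what the asyncio integration does.
    client = sentry_sdk.get_client()
    event, hint = event_from_exception(
        sys.exc_info(),
        client_options=client.options,
        mechanism={"type": "multiprocessing", "handled": False},
    )
    sentry_sdk.capture_event(event, hint=hint)
```

In the decorator above, you would call `_capture_exception()` inside the `except` block instead of `sentry_sdk.capture_exception(e)`.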