
Segmentation fault with auto instrumentation in ASGI applications

Open potatochip opened this issue 4 years ago • 15 comments

Describe your environment: I am experiencing a segmentation fault when using auto instrumentation with any ASGI application. I have tested with both Starlette and FastAPI.

Steps to reproduce: Can be reproduced with the following repo; steps are in the README.md. https://github.com/potatochip/opentelemetry-segfault-example

What is the expected behavior? No segfault

What is the actual behavior? Segfault

Additional context: Load testing sometimes needs to run a couple of times before a segfault occurs, but it usually happens on the first try.

potatochip avatar Aug 10 '21 19:08 potatochip

Thanks for the report and reproducible example. Could you please also attach a traceback of the failure?

owais avatar Aug 10 '21 20:08 owais

The container exits with code 139. These are the logs:

[2021-08-10 20:59:12 +0000] [8] [INFO] Starting gunicorn 20.1.0
[2021-08-10 20:59:12 +0000] [8] [INFO] Listening at: http://0.0.0.0:8000 (8)
[2021-08-10 20:59:12 +0000] [8] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2021-08-10 20:59:12 +0000] [14] [INFO] Booting worker with pid: 14
Overriding of current TracerProvider is not allowed
[2021-08-10 20:59:12 +0000] [14] [INFO] Started server process [14]
[2021-08-10 20:59:12 +0000] [14] [INFO] Waiting for application startup.
[2021-08-10 20:59:12 +0000] [14] [INFO] Application startup complete.
Queue is full, likely spans will be dropped.
Queue is full, likely spans will be dropped.
scripts/auto: line 2:     8 Segmentation fault      opentelemetry-instrument gunicorn -c gunicorn_conf.py auto_main:app

potatochip avatar Aug 10 '21 21:08 potatochip

An immediate difference that I see is that the message "Queue is full, likely spans will be dropped." occurs twice with automatic instrumentation, but only once with manual instrumentation.

potatochip avatar Aug 10 '21 21:08 potatochip

Does this only happen when auto instrumenting and not with a manual setup? From the logs, it does look like the tracing pipeline is set up twice. Not sure if that is the issue, though. Can you please try setting OTEL_BSP_MAX_QUEUE_SIZE to a very large value (it's 2048 by default) and see if that makes the segfault go away or take longer to happen? Alternatively, you could set it to a very low number such as 10 and see if it happens almost immediately. That could help us narrow down the issue a bit.
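For reference, in a manual setup the same limit can also be passed directly to the span processor. A minimal sketch below (the console exporter is only a placeholder; swap in whatever exporter you actually use):

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Programmatic equivalent of OTEL_BSP_MAX_QUEUE_SIZE (default is 2048).
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter(), max_queue_size=999999)
)
trace.set_tracer_provider(provider)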

owais avatar Aug 10 '21 22:08 owais

This only happens with auto instrumentation, not manual. I set OTEL_BSP_MAX_QUEUE_SIZE=999999 and the segfault still occurs, with no change in how long it takes. The only difference is that the queue-full message does not appear before the segfault occurs.

potatochip avatar Aug 11 '21 16:08 potatochip

Thanks. That means the queue is not the issue. My initial naive guess is that we are somehow instrumenting things multiple times and that is causing weird memory access issues. I'll try to reproduce with your example and debug later this week.

owais avatar Aug 11 '21 18:08 owais

I tried to run this on my machine and was able to recreate the issue. Here is the stack trace with more info:

srikanthc@FL-LPT-388s-MacBook-Pro opentelemetry-segfault-example % docker run -it -p 8000:8000 --entrypoint scripts/auto otseg
[2021-08-12 01:21:14 +0000] [7] [INFO] Starting gunicorn 20.1.0
[2021-08-12 01:21:14 +0000] [7] [INFO] Listening at: http://0.0.0.0:8000 (7)
[2021-08-12 01:21:14 +0000] [7] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2021-08-12 01:21:14 +0000] [13] [INFO] Booting worker with pid: 13
WARNING:opentelemetry.trace:Overriding of current TracerProvider is not allowed
[2021-08-12 01:21:14 +0000] [13] [INFO] Started server process [13]
[2021-08-12 01:21:14 +0000] [13] [INFO] Waiting for application startup.
[2021-08-12 01:21:14 +0000] [13] [INFO] Application startup complete.
Fatal Python error: Segmentation fault

Thread 0x00007fc5877fe700 (most recent call first):
  File "/usr/local/lib/python3.8/threading.py", line 306 in wait
  File "/usr/local/lib/python3.8/site-packages/opentelemetry/sdk/trace/export/__init__.py", line 230 in worker
  File "/usr/local/lib/python3.8/threading.py", line 870 in run
  File "/usr/local/lib/python3.8/threading.py", line 932 in _bootstrap_inner
  File "/usr/local/lib/python3.8/threading.py", line 890 in _bootstrap

Thread 0x00007fc58f627740 (most recent call first):
  File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 357 in sleep
  File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 209 in run
  File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 72 in run
  File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 231 in run
  File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 67 in run
  File "/usr/local/bin/gunicorn", line 8 in <module>
scripts/auto: line 2:     7 Segmentation fault      opentelemetry-instrument gunicorn -c gunicorn_conf.py auto_main:app

srikanthccv avatar Aug 12 '21 01:08 srikanthccv

opentelemetry-instrument gunicorn -c gunicorn_conf.py auto_main:app

gunicorn_conf.py already sets up the tracing pipeline, and I see opentelemetry-distro is installed as well, meaning the instrument command would definitely set up two pipelines. We should update the docs to not recommend using the instrument command with gunicorn and instead enable the instrumentations in gunicorn_conf.py. I think that should solve this specific issue.

That said, we still want to support setting up multiple pipelines when a user really wants it, so we should figure out whether setting up multiple pipelines is actually what causes the issue and fix it.
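For illustration, a minimal sketch of what that could look like in gunicorn_conf.py, using a post_fork hook so the pipeline is created once per worker instead of via opentelemetry-instrument. The OTLP exporter and the FastAPI instrumentor below are assumptions based on the repro; adjust to your setup:

# gunicorn_conf.py -- sketch only
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor

def post_fork(server, worker):
    # Build the tracing pipeline after the worker process has forked, so each
    # worker owns its own BatchSpanProcessor export thread.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    trace.set_tracer_provider(provider)

    # Enable instrumentation here rather than with opentelemetry-instrument,
    # so only one pipeline ever gets set up.
    FastAPIInstrumentor().instrument()

With this, gunicorn would be started plainly (gunicorn -c gunicorn_conf.py auto_main:app) without the wrapper command, and the exporter could just as well be the console or any other exporter.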

owais avatar Aug 12 '21 11:08 owais

Interestingly, I am unable to reproduce this issue if I don't use Docker.

srikanthccv avatar Sep 03 '21 13:09 srikanthccv

I'm experiencing a similar error in production:

Dec 14 05:57:28 PM  Fatal Python error: Segmentation fault
Dec 14 05:57:28 PM
Dec 14 05:57:28 PM  Thread 0x00007f1e0332f700 (most recent call first):
Dec 14 05:57:28 PM    File "/opt/render/project/python/Python-3.9.15/lib/python3.9/threading.py", line 316 in wait
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/opentelemetry/sdk/trace/export/__init__.py", line 254 in worker
Dec 14 05:57:28 PM    File "/opt/render/project/python/Python-3.9.15/lib/python3.9/threading.py", line 917 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/sentry_sdk/integrations/threading.py", line 67 in run
Dec 14 05:57:28 PM    File "/opt/render/project/python/Python-3.9.15/lib/python3.9/threading.py", line 980 in _bootstrap_inner
Dec 14 05:57:28 PM    File "/opt/render/project/python/Python-3.9.15/lib/python3.9/threading.py", line 937 in _bootstrap
Dec 14 05:57:28 PM
Dec 14 05:57:28 PM  Thread 0x00007f1e0e3e6740 (most recent call first):
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/arbiter.py", line 357 in sleep
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/arbiter.py", line 209 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/app/base.py", line 72 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/app/base.py", line 231 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/lib/python3.9/site-packages/gunicorn/app/wsgiapp.py", line 67 in run
Dec 14 05:57:28 PM    File "/opt/render/project/src/.venv/bin/gunicorn", line 8 in <module>

I have seen this quite often during the day, though at irregular intervals. We have quite a complex instrumentation setup that is entirely manual.

Versions:

  • Python 3.9.15
  • gunicorn 20.1.0 with default settings (so 1 worker process and 1 thread)

And these are the OpenTelemetry packages we use directly:

opentelemetry-api                        1.7.1
opentelemetry-exporter-otlp-proto-grpc   1.7.1
opentelemetry-instrumentation-botocore   0.26b1
opentelemetry-instrumentation-celery     0.26b1
opentelemetry-instrumentation-flask      0.26b1
opentelemetry-instrumentation-redis      0.26b1
opentelemetry-instrumentation-requests   0.26b1
opentelemetry-instrumentation-sqlalchemy 0.26b1
opentelemetry-sdk                        1.7.1

I checked the changelog for the newer versions of the library but I haven't seen anything related.

Any ideas?

rhymes avatar Dec 14 '22 17:12 rhymes

There wasn't an exact fix, but many related fixes have landed since. Can you check whether this still happens on 1.15.0 and 0.36b0?

srikanthccv avatar Dec 18 '22 06:12 srikanthccv

@srikanthccv thanks for the tip! I will attempt to upgrade and report back :-) It'll take a bit, as it's not a smooth upgrade.

rhymes avatar Dec 19 '22 15:12 rhymes

Hi @srikanthccv,

unfortunately, it didn't work. It might be due to OpenTelemetry's interaction with the Sentry SDK within the Flask app; I've also tried inverting the initialization order of the two, but that gained nothing. I'll try using a CPU and memory profiler to see if I can get to the bottom of this.
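Roughly, by "inverting the initialization order" I mean swapping the two steps in this sketch (the DSN and exporter are placeholders, not our real config):

import sentry_sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Tried both ways: Sentry first then OpenTelemetry, and the reverse.
sentry_sdk.init(dsn="https://public@example.ingest.sentry.io/0")  # placeholder DSN

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)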

Thanks in the meantime!

rhymes avatar Dec 21 '22 14:12 rhymes

I think I have managed to hit the same (or a very similar) issue:

Thread 0x0000704b0f63f6c0 (most recent call first):
  File "/home/dima/.local/share/uv/python/cpython-3.9.22-linux-x86_64-gnu/lib/python3.9/threading.py", line 316 in wait
  File "/code/operator/.tox/unit/lib/python3.9/site-packages/opentelemetry/sdk/trace/export/__init__.py", line 258 in worker
  File "/home/dima/.local/share/uv/python/cpython-3.9.22-linux-x86_64-gnu/lib/python3.9/threading.py", line 917 in run

Full trace: https://gist.github.com/dimaqq/c0ab0cae051a2177c28ec4a560dff183

My case is similar to the OP's, because the unit tests are distributed by pytest-xdist, meaning worker processes are spawned, similar to gunicorn.

dimaqq avatar May 30 '25 02:05 dimaqq

Given that the issue occurred on older Python versions that no longer accept bug fixes, and could not be reproduced on newer Pythons, I recommend closing this issue.

dimaqq avatar May 30 '25 06:05 dimaqq

Python 3.9 is EOL. Let's close this.

dimaqq avatar Nov 12 '25 00:11 dimaqq