opentelemetry-python
Client requests with a partial set of B3 propagation headers cause a crash
Describe your environment
- Python 3.8
- Flask 2.0.3
- opentelemetry-propagator-b3 = 1.10.0
- opentelemetry-instrumentation-flask = 0.29b0
Steps to reproduce
Following https://opentelemetry.io/docs/instrumentation/python/getting-started/#configure-your-http-propagator-b3-baggage, call set_global_textmap with B3MultiFormat() rather than the deprecated B3Format:
set_global_textmap(B3MultiFormat())
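For completeness, here is a minimal sketch of the setup in use (the app and route names are placeholders, not from my actual service):

```python
from flask import Flask

from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.propagate import set_global_textmap
from opentelemetry.propagators.b3 import B3MultiFormat

# Use the multi-header B3 propagator (x-b3-traceid, x-b3-spanid, ...)
# instead of the deprecated B3Format.
set_global_textmap(B3MultiFormat())

app = Flask(__name__)  # placeholder app
FlaskInstrumentor().instrument_app(app)

@app.route("/ping")  # placeholder route
def ping():
    return "pong"
```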
1. Make an API request from Postman (or another client tool of your choice) and set the following headers:
   - x-b3-traceid: 8b65d983c41ee5a6
   - x-b3-spanid: 8b65d983c41ee5a6
   - b3: 8b65d983c41ee5a6-8b65d983c41ee5a6
2. See that it works.
3. Make the same request, but this time do not set the b3 header (still set the x-b3-traceid and x-b3-spanid headers); a request sketch is shown after these steps.
4. See that it crashes.
5. Make the same request without any of the headers.
6. See that it works.
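For anyone reproducing step 3 without Postman, here is a rough equivalent using the requests library (the URL and route are placeholders for whatever endpoint the instrumented app serves):

```python
import requests

# Step 3 above: send only the multi-header fields and omit the single "b3" header.
resp = requests.get(
    "http://localhost:5000/ping",  # placeholder endpoint
    headers={
        "x-b3-traceid": "8b65d983c41ee5a6",
        "x-b3-spanid": "8b65d983c41ee5a6",
        # "b3": "8b65d983c41ee5a6-8b65d983c41ee5a6",  # omitting this triggers the crash
    },
)
print(resp.status_code)
```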
What is the expected behavior?
Clients providing an incomplete set of propagation headers should not cause an unhandled exception to be raised in the web app.
What is the actual behavior?
2022-04-20T19:23:47.632614+00:00 [ERROR] [None] [5368] waitress - Exception while serving <path omitted>
Traceback (most recent call last):
File "<path-omitted>\.venv\lib\site-packages\waitress\channel.py", line 426, in service
task.service()
File "<path-omitted>\.venv\lib\site-packages\waitress\task.py", line 168, in service
self.execute()
File "<path-omitted>\.venv\lib\site-packages\waitress\task.py", line 434, in execute
app_iter = self.channel.server.application(environ, start_response)
File "<path-omitted>\.venv\lib\site-packages\paste\translogger.py", line 69, in __call__
return self.application(environ, replacement_start_response)
File "<path-omitted>\.venv\lib\site-packages\flask\app.py", line 2091, in __call__
return self.wsgi_app(environ, start_response)
File "<path-omitted>\.venv\lib\site-packages\werkzeug\middleware\proxy_fix.py", line 187, in __call__
return self.app(environ, start_response)
File "<path-omitted>\.venv\lib\site-packages\opentelemetry\instrumentation\flask\__init__.py", line 170, in _wrapped_app
return wsgi_app(wrapped_app_environ, _start_response)
File "<path-omitted>\.venv\lib\site-packages\flask\app.py", line 2080, in wsgi_app
return response(environ, start_response)
File "<path-omitted>\.venv\lib\site-packages\werkzeug\wrappers\response.py", line 630, in __call__
start_response(status, headers)
File "<path-omitted>\.venv\lib\site-packages\opentelemetry\instrumentation\flask\__init__.py", line 156, in _start_response
if span.kind == trace.SpanKind.SERVER:
AttributeError: 'NonRecordingSpan' object has no attribute 'kind'
Process finished with exit code 0
Additional context
Reading https://github.com/openzipkin/b3-propagation further, it doesn't seem like the b3 header should be necessary for B3MultiFormat at all; I would expect it to be used only by B3SingleFormat.
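For reference, the traceback shows the flask instrumentation reading span.kind on a NonRecordingSpan, which does not define that attribute. Below is a minimal sketch of the kind of defensive check that would avoid the AttributeError, using only the public Span API; the function name is hypothetical and this is an illustration, not the actual instrumentation code or an official fix:

```python
from opentelemetry import trace

def _set_status_if_server_span(span: trace.Span, status: str) -> None:
    # NonRecordingSpan reports is_recording() -> False and has no .kind,
    # so guard on is_recording() before touching SDK-only attributes.
    if span.is_recording() and getattr(span, "kind", None) == trace.SpanKind.SERVER:
        # status is a WSGI status string such as "200 OK".
        span.set_attribute("http.status_code", int(status.split(" ")[0]))
```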
Hi @ewolz3, any luck with this?
There isn't much (any) documentation, but I tried to do the same and encountered issues. GCP has recommended OpenTelemetry as the tracing approach, but we need B3 header propagation to merge custom traces with Istio/Anthos service mesh traces. I am curious whether anybody has done this yet or if this is new territory; examples are very hard to come by.
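For anyone else trying this with Istio/Anthos, here is a quick sanity check that B3 multi headers are actually injected on outgoing calls (a sketch, assuming opentelemetry-sdk is installed and the global propagator is set to B3MultiFormat as above):

```python
from opentelemetry import trace
from opentelemetry.propagate import inject, set_global_textmap
from opentelemetry.propagators.b3 import B3MultiFormat
from opentelemetry.sdk.trace import TracerProvider

# Requires the opentelemetry-sdk package in addition to the API.
trace.set_tracer_provider(TracerProvider())
set_global_textmap(B3MultiFormat())

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("outbound-check"):
    carrier = {}
    inject(carrier)
    # With B3MultiFormat, carrier should now contain x-b3-traceid,
    # x-b3-spanid and x-b3-sampled, which Istio/Envoy can join on.
    print(carrier)
```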
@ColeSiegelTR - It's not been a major blocking issue in my environment (all 3 headers are getting set by Cloud Foundry), but it was unexpected behavior I encountered in unit testing that I wanted to raise on behalf of the community.
@ewolz3 Is this still happening after this?