otelsql
Rely solely on the trace provider to get the tracer
Hi! We've recently been using otelsql in our codebase to modernise some old SQL tracing code and it's been working well :)
One thing that got in the way is that otelsql caches the tracer when it's initialised, which creates a chicken-and-egg problem for us: our tracing code requires reading some settings from the database, but setting up the database driver requires tracing, leading otelsql to use the wrong tracer if we didn't change anything.
The present PR instead relies fully on the otel code to handle caching the tracer, and simply calls the tracing provider whenever it needs one.
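To illustrate the shape of the change, here's a minimal sketch; the type and names below are made up for illustration and are not otelsql's actual internals:

package example

import (
	"context"

	"go.opentelemetry.io/otel/trace"
)

// Illustrative only, not otelsql's actual internals.
type instrumentation struct {
	provider trace.TracerProvider
	tracer   trace.Tracer // populated only in the cached variant
}

// Cached variant: the tracer is resolved once when the driver is set up, so a
// provider that only becomes fully configured later is effectively missed.
func newCached(tp trace.TracerProvider) *instrumentation {
	return &instrumentation{provider: tp, tracer: tp.Tracer("example")}
}

// Live variant (the shape this PR proposes): keep only the provider and ask it
// for a tracer each time a span is needed, relying on the provider's own
// caching of repeated Tracer() calls.
func (i *instrumentation) startSpan(ctx context.Context, name string) (context.Context, trace.Span) {
	return i.provider.Tracer("example").Start(ctx, name)
}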
I think this class of problem may be common in the context of legacy apps whose init layer is a bit more complicated than usual.
We have run this code in production for a few weeks now without observing any issues. Please tell me if you think there is a better way of handling this, or if I can modify the code more to your liking.
Codecov Report
Base: 80.2% // Head: 79.6% // Decreases project coverage by -0.6% :warning:
Coverage data is based on head (38f4a6e) compared to base (dd2cb66). Patch coverage: 54.5% of modified lines in pull request are covered.
Additional details and impacted files
@@ Coverage Diff @@
## main #115 +/- ##
=======================================
- Coverage 80.2% 79.6% -0.6%
=======================================
Files 13 13
Lines 652 658 +6
=======================================
+ Hits 523 524 +1
- Misses 105 110 +5
Partials 24 24
Impacted Files | Coverage Δ |
---|---|
config.go | 75.7% <50.0%> (-13.2%) :arrow_down: |
utils.go | 100.0% <100.0%> (ø) |
Hi @jhchabran, thanks for submitting this PR!
our tracing code requires reading some setting from the database, but setting up the database driver requires tracing, leading otelsql to use the wrong tracer if we didn't change anything.
I would like to give end-users the choice between a cached tracer (more performant) and a dynamic tracer provider (more flexible). I wonder whether we can solve this issue by retrieving the TracerProvider from context.
Like this code: https://github.com/open-telemetry/opentelemetry-go-contrib/blob/9f3760a3b2683e30b44990ebf5e5fefcb296ecca/instrumentation/net/http/otelhttp/transport.go#L96-L102 https://github.com/open-telemetry/opentelemetry-go-contrib/blob/9f3760a3b2683e30b44990ebf5e5fefcb296ecca/instrumentation/net/http/otelhttp/config.go#L73-L76
I think retrieving the TracerProvider from context can solve this dependency issue once and for all.
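Roughly, that pattern looks like this (a sketch adapted from the otelhttp code linked above; the tracer name is just a placeholder):

package example

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/trace"
)

// If the incoming context already carries a valid span, reuse that span's
// provider; otherwise fall back to the globally registered provider.
func tracerFromContext(ctx context.Context) trace.Tracer {
	if span := trace.SpanFromContext(ctx); span.SpanContext().IsValid() {
		return span.TracerProvider().Tracer("example")
	}
	return otel.GetTracerProvider().Tracer("example")
}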
Or maybe adding an option like WithLiveTracer to disable the tracer cache, but I am not sure about this; no existing otel instrumentation has such an option.
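Something like this, purely as a hypothetical sketch (neither WithLiveTracer nor these config fields exist in otelsql today):

package example

import "go.opentelemetry.io/otel/trace"

// Hypothetical config; the field names are made up for illustration.
type config struct {
	tracerProvider trace.TracerProvider
	liveTracer     bool // when true, skip the instrumentation-level tracer cache
}

type Option func(*config)

// WithLiveTracer would disable the cached tracer and resolve it per call.
func WithLiveTracer() Option {
	return func(c *config) { c.liveTracer = true }
}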
I wonder, is it possible to create a new DB instance after obtaining the tracing setting from the database? If so, then we might not need this PR.
I would like to leave end-users a chance to choose cached tracer (more performance) or dynamic tracer provider (more flexible). I wonder whether we can solve this issue by retrieving the TracerProvider from context.
That makes sense, I'll take a look!
I wonder, is it possible to create a new DB instance after obtaining the tracing setting from the database? If so, then we might not need this PR.
In our case that would be doable, but it would make the code convoluted. Since my understanding so far is that dynamically getting the tracer from the provider is idiomatic, it sounds like the best approach, especially if it's possible to toggle it as you mentioned. Please tell me if I'm wrong to assume so!
hey @XSAM, also note that internally Tracer() already handles caching of repeat requests:
https://github.com/open-telemetry/opentelemetry-go/blob/5e8eb855bf3c6507715edd12ded5c6a950dd6104/sdk/trace/provider.go#L141-L160
The cache holds a lock, but in the context of a database query the impact should be negligible
That said, @jhchabran, upon a closer look at Tracer():
If the same name and options are passed multiple times, the same Tracer will be returned (it is up to the implementation if this will be the same underlying instance of that Tracer or not). It is not necessary to call this multiple times with the same name and options to get an up-to-date Tracer. All implementations will ensure any TracerProvider configuration changes are propagated to all provided Tracers.
This might be an implementation error on our end as well - maybe we need to configure a zero-value tracer that can be swapped out after initialization
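For what it's worth, otel's global package already behaves a bit like that zero-value tracer: a tracer obtained from otel.GetTracerProvider() before the real provider is installed starts forwarding to it once otel.SetTracerProvider is called. A minimal sketch, assuming the instrumentation ends up wired to the global provider:

package example

import (
	"context"

	"go.opentelemetry.io/otel"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func run() {
	// Obtained before the real provider exists; at this point the tracer is
	// backed by otel's delegating global provider.
	tracer := otel.GetTracerProvider().Tracer("example")

	// ... later, once the settings have been read from the database ...
	tp := sdktrace.NewTracerProvider( /* options built from those settings */ )
	otel.SetTracerProvider(tp)

	// Spans started from the previously obtained tracer now go through tp,
	// because the global provider forwards to whatever was set last.
	_, span := tracer.Start(context.Background(), "query")
	defer span.End()
}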
All implementations will ensure any TracerProvider configuration changes are propagated to all provided Tracers.
I am not sure what kind of dynamic config you need to fetch from the database. But TracerProvider allows changing the SpanProcessor on the fly, though not other things like the Sampler. You can register a new SpanProcessor with the tracer provider after getting a tracer, and this change can affect the existing tracer.
// New tracer provider
tp := sdktrace.NewTracerProvider(
	sdktrace.WithSampler(sdktrace.AlwaysSample()),
	sdktrace.WithResource(resource.NewWithAttributes(
		semconv.SchemaURL,
		semconv.ServiceNameKey.String(serviceName),
	)),
)
// New tracer
tracer := tp.Tracer(instrumentationName)
// Fetch config from the database and build a span exporter from it
var exporter sdktrace.SpanExporter
// Register a new span processor wrapping that exporter
tp.RegisterSpanProcessor(sdktrace.NewSimpleSpanProcessor(exporter))
// And this affects existing tracers obtained from tp
ctx, span := tracer.Start(context.Background(), "example")
If you only want to change the SpanProcessor on the fly, then there is no need for this PR.
@jhchabran, I see the related PRs have been merged, so I am closing this PR.
Please feel free to reopen it if you need to.