Simpler API for changing the default interval for batch exports in opentelemetry-jaeger
The module docs recommend using a batch exporter for optimal performance, which is as simple as:
let tracer = opentelemetry_jaeger::new_pipeline()
    .install_batch(opentelemetry::runtime::Tokio)?;
However, the default batch export interval is 5 seconds, which can be rather long when debugging a system in real time.
Side note: this default value is also rather hard to find; it requires inspecting the source code, and it is several hops away from the implementation of install_batch. The latency that batching may add is the first question that comes to mind here, so perhaps it should be stated more explicitly in the docs.
It is possible to change the interval and other related settings, but doing so is not very ergonomic, is undocumented, and requires digging through quite a lot of source code to figure out:
use std::time::Duration;

use opentelemetry::sdk::{self, trace::BatchSpanProcessor};

let exporter = opentelemetry_jaeger::new_pipeline()
    .init_async_exporter(opentelemetry::runtime::Tokio)?;
let processor = BatchSpanProcessor::builder(exporter, opentelemetry::runtime::Tokio)
    .with_scheduled_delay(Duration::from_millis(500))
    .build();
let provider = sdk::trace::TracerProvider::builder()
    .with_span_processor(processor)
    .build();
let tracer = provider.tracer("opentelemetry-jaeger");
let _ = opentelemetry::global::set_tracer_provider(provider);
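For completeness, the batch span processor can also be tuned without code changes via the environment variables defined in the OpenTelemetry specification, which the Rust SDK reads when building the processor. A minimal sketch (the 500 ms value is illustrative, not a recommendation):

```shell
# OTEL_BSP_* variables configure the batch span processor per the OTel spec.
# Durations are in milliseconds, sizes are item counts.
export OTEL_BSP_SCHEDULE_DELAY=500        # export every 500 ms instead of the 5 s default
export OTEL_BSP_MAX_QUEUE_SIZE=2048       # spec default queue size
export OTEL_BSP_MAX_EXPORT_BATCH_SIZE=512 # spec default batch size
echo "OTEL_BSP_SCHEDULE_DELAY=$OTEL_BSP_SCHEDULE_DELAY"
```

This avoids hand-assembling a BatchSpanProcessor, at the cost of moving the configuration out of the code.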
It would be nice to expose the ability to configure BatchConfig at the top level of the API, in opentelemetry_jaeger::PipelineBuilder.
Related question: install_batch uses TracerProvider::tracer_versioned, but I didn't want to get into dynamically retrieving the crate version, so I used TracerProvider::tracer instead. Could this be problematic?
Yeah, I agree there is a gap in how to set BatchConfig for the batch span processors. Currently the only way to do it is to build your own span processor or use env vars.
We should provide an API like

builder.with_batch_processor_config(BatchConfig::default().with_max_queue_size(200))
    .install_batch(Tokio)

for all exporters that can work with a batch span processor.
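To make the proposed shape concrete, here is a minimal, self-contained sketch of the builder pattern the suggestion implies. The types here (BatchConfig, PipelineBuilder) are stand-ins that only mirror the names used above, not the real opentelemetry-sdk definitions:

```rust
use std::time::Duration;

// Stand-in for the SDK's batch processor configuration, with chainable setters.
#[derive(Debug, Clone)]
struct BatchConfig {
    scheduled_delay: Duration,
    max_queue_size: usize,
}

impl Default for BatchConfig {
    fn default() -> Self {
        // Defaults matching the OTel spec: 5 s schedule delay, 2048 queue slots.
        BatchConfig {
            scheduled_delay: Duration::from_secs(5),
            max_queue_size: 2048,
        }
    }
}

impl BatchConfig {
    fn with_scheduled_delay(mut self, delay: Duration) -> Self {
        self.scheduled_delay = delay;
        self
    }
    fn with_max_queue_size(mut self, size: usize) -> Self {
        self.max_queue_size = size;
        self
    }
}

// Stand-in for the exporter's pipeline builder.
struct PipelineBuilder {
    batch_config: BatchConfig,
}

impl PipelineBuilder {
    fn new() -> Self {
        PipelineBuilder { batch_config: BatchConfig::default() }
    }
    // The method this issue asks for: accept a BatchConfig at the top level.
    fn with_batch_processor_config(mut self, config: BatchConfig) -> Self {
        self.batch_config = config;
        self
    }
}

fn main() {
    let builder = PipelineBuilder::new().with_batch_processor_config(
        BatchConfig::default()
            .with_scheduled_delay(Duration::from_millis(500))
            .with_max_queue_size(200),
    );
    println!("delay_ms={}", builder.batch_config.scheduled_delay.as_millis());
    println!("queue={}", builder.batch_config.max_queue_size);
}
```

The builder would pass the config through to BatchSpanProcessor internally, so callers never touch the processor directly.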
It would be my honor to take this issue. I plan to add an API like

opentelemetry_jaeger::new_collector_pipeline()
    .with_batch_processor_config(BatchConfig::default().with_scheduled_delay(Duration::from_millis(100)))
    .install_batch(Tokio)

for all pipelines. Is this API form appropriate? If so, could you please assign it to me?
This issue should have been fixed by #869 and released as part of 0.18.