opentelemetry-rust
Appender-tracing and target
OpenTelemetry-Appender-Tracing currently ignores the target from the tracing Metadata; it is only stored as a "log.target" attribute when the experimental feature flag is enabled.
The target from tracing is what OTel calls the instrumentation scope's name, so the target should be stored as instrumentation_scope.name.
Currently, the instrumentation_scope is hardcoded to be "opentelemetry-appender-tracing", which makes it less useful, as every LogRecord will have the same information, defeating the purpose of scope/target in OpenTelemetry.
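For reference, a small illustration (crate and module names here are made up) of where the target comes from in tracing: it defaults to the module path of the call site and can be overridden per event.

use tracing::error;

mod db {
    use tracing::error;

    pub fn connect() {
        // Default target is the call site's module path, e.g. "my_crate::db".
        error!("connection failed");
    }
}

fn main() {
    // Default target here is the crate root, e.g. "my_crate".
    error!("startup failed");

    // The target can also be set explicitly per event.
    error!(target: "my_crate::db", "connection failed");

    db::connect();
}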
Suggested fixes:
- Modify the appender to use a different Logger for each target it encounters. This would likely hurt perf/throughput due to the need for a read or read+insert on a lock-protected hashmap in the hot path. The contention could be mitigated with a thread-local hashmap, at the expense of more memory. (IIUC, OTel Java uses this approach, but not sure if they use the thread-local optimization or not.) See the sketch after this list.
- Store target as a special attribute "_tracing.target" in the LogRecord. In the OTLP exporter thread, treat "_tracing.target" specially and use it to construct the instrumentation scope name (and then do not export this special attribute). OTel .NET does something similar to avoid a performance bottleneck: https://github.com/open-telemetry/opentelemetry-dotnet/pull/4941
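A minimal sketch of the first option, assuming a hypothetical LoggerCache inside the appender (the Logger type below is a stand-in, not the SDK's type); the lock-protected lookup is the hot-path cost described above.

use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Hypothetical stand-in for the per-scope logger the appender would hold
// (in practice, whatever the SDK's LoggerProvider hands back per scope name).
struct Logger {
    #[allow(dead_code)]
    scope_name: String,
}

#[derive(Default)]
struct LoggerCache {
    // Lock-protected map from tracing target to a per-scope Logger.
    loggers: RwLock<HashMap<String, Arc<Logger>>>,
}

impl LoggerCache {
    // Read-mostly lookup; the write lock is taken only the first time a
    // given target is seen. This read (or read+insert) is what would sit
    // in the hot path of every log emission.
    fn get_or_create(&self, target: &str) -> Arc<Logger> {
        if let Some(logger) = self.loggers.read().unwrap().get(target) {
            return Arc::clone(logger);
        }
        let mut map = self.loggers.write().unwrap();
        Arc::clone(map.entry(target.to_string()).or_insert_with(|| {
            Arc::new(Logger {
                scope_name: target.to_string(),
            })
        }))
    }
}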
Note that for events, tracing defaults to a target like "event file:line", which does not really fit the OTel semantics for instrumentation_scope.name.
I tried the example below, slightly modified from the existing example, and can see that the target comes through as "basic", "basic::foo", and "basic::foo::baz" respectively...
//! run with `$ cargo run --example basic`
use opentelemetry::KeyValue;
use opentelemetry_appender_tracing::layer;
use opentelemetry_sdk::{
    logs::{Config, LoggerProvider},
    Resource,
};
use tracing::error;
use tracing_subscriber::prelude::*;

fn main() {
    let exporter = opentelemetry_stdout::LogExporter::default();
    let provider: LoggerProvider = LoggerProvider::builder()
        .with_config(
            Config::default().with_resource(Resource::new(vec![KeyValue::new(
                "service.name",
                "log-appender-tracing-example",
            )])),
        )
        .with_simple_exporter(exporter)
        .build();
    let layer = layer::OpenTelemetryTracingBridge::new(&provider);
    tracing_subscriber::registry().with(layer).init();

    error!(name: "my-event-name", event_id = 20, user_name = "otel", user_email = "[email protected]");
    foo::bar();
    foo::baz::qux();
    drop(provider);
}

mod foo {
    use tracing::error;

    pub fn bar() {
        error!(name: "my-event-name", event_id = 21, user_name = "otel");
    }

    pub(crate) mod baz {
        use tracing::error;

        pub fn qux() {
            error!(name: "my-event-name", event_id = 22, user_name = "otel");
        }
    }
}
@lalitb The missing target is limiting the appender's utility - please reconsider the priority for this item and move it to beta, if possible.
Had a quick look into the Java implementation - it uses a ComponentRegistry to maintain loggers/meters/tracers, which internally uses a ConcurrentHashMap for storage. Looking further into how other languages do this before finalizing on the second approach.
I am more inclined towards Option 2.
Option 1 - Using a different logger for each target means we need to maintain these loggers in some (thread-safe) hashmap, and the insertion/retrieval will put a lock in the hot path. Alternatively, a thread-local hashmap may not be optimal for scenarios with frequently spawned short-lived threads, and can also lead to redundant entries and inefficient memory usage.
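For illustration, the thread-local variant weighed here could look roughly like the sketch below (the Logger type and function names are made up); each thread keeps its own target-to-logger map, trading lock contention for duplicated entries across threads.

use std::cell::RefCell;
use std::collections::HashMap;

// Hypothetical stand-in for whatever per-scope logger the appender holds.
#[derive(Clone)]
struct Logger {
    scope_name: String,
}

thread_local! {
    // One map per thread: no lock contention, but every thread that logs
    // with the same target keeps its own redundant Logger entry.
    static LOGGERS: RefCell<HashMap<String, Logger>> = RefCell::new(HashMap::new());
}

fn logger_for_target(target: &str) -> Logger {
    LOGGERS.with(|cell| {
        cell.borrow_mut()
            .entry(target.to_string())
            .or_insert_with(|| Logger {
                scope_name: target.to_string(),
            })
            .clone()
    })
}

fn main() {
    let logger = logger_for_target("basic::foo");
    println!("scope = {}", logger.scope_name);
}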
Option 2 - Looks the least disruptive, and efficient. We can also maintain the target as a field on LogRecord, which would be consistent with the metadata of logs and tracing, similar to the approach we used for adding event-name. The exporters can then decide how to handle this field - e.g., the OTLP exporter can transform it into the OTLP instrumentation scope name. The downside would be the increase in LogRecord size. Alternatively, the target could be sent as a special attribute in LogRecord, and if that attribute is present, the processor/exporter would move it to instrumentation_scope.name.
I'd suggest keeping it as a top-level field. Putting it into attributes means one has to iterate through the attributes to find the target.
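For illustration only, a rough sketch of that direction under these assumptions: a simplified LogRecord with a top-level target field (the names here are hypothetical, not the SDK's types), plus an exporter-side grouping step that turns each distinct target into one instrumentation scope.

use std::collections::HashMap;

// Hypothetical, simplified LogRecord carrying the tracing target as a
// top-level field instead of as an attribute.
struct LogRecord {
    target: Option<String>,
    body: String,
}

// Exporter-side grouping: each distinct target becomes one instrumentation
// scope (scope.name = target), falling back to the appender's own name when
// no target is present.
fn group_by_scope(records: Vec<LogRecord>) -> HashMap<String, Vec<LogRecord>> {
    let mut scopes: HashMap<String, Vec<LogRecord>> = HashMap::new();
    for record in records {
        let scope_name = record
            .target
            .clone()
            .unwrap_or_else(|| "opentelemetry-appender-tracing".to_string());
        scopes.entry(scope_name).or_default().push(record);
    }
    scopes
}

fn main() {
    let records = vec![
        LogRecord { target: Some("basic::foo".into()), body: "event 21".into() },
        LogRecord { target: Some("basic::foo::baz".into()), body: "event 22".into() },
    ];
    for (scope, group) in group_by_scope(records) {
        for record in &group {
            println!("instrumentation_scope.name = {scope}: {}", record.body);
        }
    }
}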