
[Feature]: `opentelemetry-appender-tracing` should respect information inside `tracing::Span`

Open 0xForerunner opened this issue 2 years ago • 33 comments

It's great to be able to correlate logs with traces, as is possible with the Grafana Tempo + Loki combo. This could likely be exposed as some sort of configuration option.

0xForerunner avatar Nov 16 '23 16:11 0xForerunner

The logs should be correlated automatically. Are you not seeing that behavior?

cijothomas avatar Nov 16 '23 17:11 cijothomas

Yes - this should be correlated automatically, as long as the application is instrumented with OpenTelemetry tracing (either through tracing-opentelemetry or directly using the opentelemetry tracing API). It won't be if the application uses any other (non-OTel) tracing subscriber.
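
For reference, a rough sketch of the combined setup being described here, using stdout exporters and the crate APIs of that era (opentelemetry_sdk 0.21, tracing-opentelemetry 0.22, opentelemetry-appender-tracing 0.2); the tracer name is just a placeholder. Note that, as the rest of this thread shows, the bridge at this point only picked up context from spans created through the OpenTelemetry tracer API, not from tracing spans:

use opentelemetry::trace::TracerProvider as _;
use opentelemetry_appender_tracing::layer::OpenTelemetryTracingBridge;
use opentelemetry_sdk::{logs::LoggerProvider, trace::TracerProvider};
use tracing_subscriber::prelude::*;

fn main() {
    // Traces: SDK tracer provider exporting spans to stdout.
    let tracer_provider = TracerProvider::builder()
        .with_simple_exporter(opentelemetry_stdout::SpanExporter::default())
        .build();
    let tracer = tracer_provider.tracer("example-app");

    // Logs: SDK logger provider exporting log records to stdout.
    let logger_provider = LoggerProvider::builder()
        .with_simple_exporter(opentelemetry_stdout::LogExporter::default())
        .build();

    // Bridge `tracing` spans to OTel via tracing-opentelemetry, and
    // `tracing` events to OTel logs via the appender bridge.
    tracing_subscriber::registry()
        .with(tracing_opentelemetry::layer().with_tracer(tracer))
        .with(OpenTelemetryTracingBridge::new(&logger_provider))
        .init();

    // ... emit spans and events here ...
}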

lalitb avatar Nov 16 '23 17:11 lalitb

[Screenshot: 2023-11-16 at 9:24 AM] This is the information I'm getting in Loki, with no trace_id information.

Logs are being generated with this code

        let logs = opentelemetry_otlp::new_pipeline()
            .logging()
            .with_exporter(opentelemetry_otlp::new_exporter().http())
            .with_log_config(
                opentelemetry_sdk::logs::config().with_resource(Resource::new(vec![
                    KeyValue::new("service.name", "liquidation-bot-rs"),
                ])),
            )
            .install_batch(opentelemetry_sdk::runtime::Tokio)?;

        let logging = OpenTelemetryTracingBridge::new(&logs.provider().unwrap())
            .with_filter(EnvFilter::from_default_env());
        
        subscriber = Registry::default()
            .with(stdout)
            .with(logging)
            .with(paper_trail)
            .with(metrics)
            .with(tracing);

0xForerunner avatar Nov 16 '23 17:11 0xForerunner

Yes - this should be correlated automatically, as long as the application is instrumented with OpenTelemetry tracing (either through tracing-opentelemetry or directly using the opentelemetry tracing API). It won't be if the application uses any other (non-OTel) tracing subscriber.

I can confirm things are properly instrumented with tracing-opentelemetry. The log in the screenshot above is inside a function with #[instrument] and I am correctly seeing traces and events.

0xForerunner avatar Nov 16 '23 17:11 0xForerunner

Can you use stdout exporter and share the output? We just want to confirm if this issue is with OTLP or Loki itself.

cijothomas avatar Nov 16 '23 17:11 cijothomas

Can you use stdout exporter and share the output? We just want to confirm if this issue is with OTLP or Loki itself.

Or just the log output from otel collector

cijothomas avatar Nov 16 '23 17:11 cijothomas

Can you use stdout exporter and share the output? We just want to confirm if this issue is with OTLP or Loki itself.

This code somehow results in no output...

        let exporter = opentelemetry_stdout::LogExporter::default();
        let logs = LoggerProvider::builder()
            .with_simple_exporter(exporter)
            .build();
        let logging = OpenTelemetryTracingBridge::new(&logs);

0xForerunner avatar Nov 16 '23 18:11 0xForerunner

collector logs are scarce

[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.702Z info    [email protected]/telemetry.go:84 Setting up own telemetry...
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.702Z info    [email protected]/telemetry.go:201        Serving Prometheus metrics      {"address": ":8888", "level": "Basic"}
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.703Z info    [email protected]/exporter.go:41     using the new Loki exporter     {"kind": "exporter", "data_type": "logs", "name": "loki"}
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.707Z info    [email protected]/service.go:143  Starting otelcol-contrib...     {"Version": "0.88.0", "NumCPU": 8}
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.708Z info    extensions/extensions.go:33     Starting extensions...
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.708Z info    extensions/extensions.go:36     Extension is starting...        {"kind": "extension", "name": "basicauth/server"}
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.708Z info    extensions/extensions.go:43     Extension started.      {"kind": "extension", "name": "basicauth/server"}
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.708Z info    extensions/extensions.go:36     Extension is starting...        {"kind": "extension", "name": "basicauth/traces"}
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.708Z info    extensions/extensions.go:43     Extension started.      {"kind": "extension", "name": "basicauth/traces"}
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.708Z info    extensions/extensions.go:36     Extension is starting...        {"kind": "extension", "name": "basicauth/metrics"}
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.708Z info    extensions/extensions.go:43     Extension started.      {"kind": "extension", "name": "basicauth/metrics"}
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.708Z info    extensions/extensions.go:36     Extension is starting...        {"kind": "extension", "name": "basicauth/logs"}
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.708Z info    extensions/extensions.go:43     Extension started.      {"kind": "extension", "name": "basicauth/logs"}
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.709Z warn    [email protected]/warning.go:40  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks        {"kind": "receiver", "name": "otlp", "data_type": "metrics", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.709Z info    [email protected]/otlp.go:101        Starting HTTP server    {"kind": "receiver", "name": "otlp", "data_type": "metrics", "endpoint": "0.0.0.0:4318"}
[otel-collector] [2023-11-16 07:19:57] 2023-11-16T07:19:57.710Z info    [email protected]/

0xForerunner avatar Nov 16 '23 18:11 0xForerunner

Follow this example as-is https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-otlp/examples/basic-otlp/README.md#manual

You should see logs correctly correlated to traces https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-otlp/examples/basic-otlp/src/main.rs#L126
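
For illustration, the correlation pattern that example relies on looks roughly like this (a condensed sketch, not copied from that file; the function name and span name are placeholders): the log event is emitted while a span created through the OpenTelemetry tracer API is active, so the appender can read the active span's context.

use opentelemetry::{global, trace::Tracer};
use tracing::error;

fn emit_correlated_log() {
    // Assumes a global tracer provider and the appender-tracing bridge
    // have already been installed (as in the linked example).
    let tracer = global::tracer("ex.com/basic");
    tracer.in_span("example-span", |_cx| {
        // Emitted while the OTel span is active, so the exported
        // LogRecord carries this span's trace_id and span_id.
        error!(target: "my-system", "something went wrong");
    });
}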

cijothomas avatar Nov 16 '23 18:11 cijothomas

Can you use stdout exporter and share the output? We just want to confirm if this issue is with OTLP or Loki itself.

This code somehow results in no output...

        let exporter = opentelemetry_stdout::LogExporter::default();
        let logs = LoggerProvider::builder()
            .with_simple_exporter(exporter)
            .build();
        let logging = OpenTelemetryTracingBridge::new(&logs);

Do you know what's going on here? Something about that is blocking all output on the tracing subscriber.

0xForerunner avatar Nov 16 '23 18:11 0xForerunner

collector logs are scarce

I should have been more precise, apologies. You need the "logging" exporter in the collector to see the actual data it receives. https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-otlp/examples/basic-otlp/otel-collector-config.yaml#L14

cijothomas avatar Nov 16 '23 18:11 cijothomas

Can you use stdout exporter and share the output? We just want to confirm if this issue is with OTLP or Loki itself.

This code somehow results in no output...

        let exporter = opentelemetry_stdout::LogExporter::default();
        let logs = LoggerProvider::builder()
            .with_simple_exporter(exporter)
            .build();
        let logging = OpenTelemetryTracingBridge::new(&logs);

Do you know what's going on here? Something about that is blocking all output on the tracing subscriber.

Start with the example from the repo. That should work as-is. And you can tweak it to match your setup. https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-appender-tracing/examples/basic.rs

cijothomas avatar Nov 16 '23 18:11 cijothomas

debug logs

2023-11-16T18:21:31.684Z        info    [email protected]/telemetry.go:84 Setting up own telemetry...
2023-11-16T18:21:31.685Z        info    [email protected]/telemetry.go:201        Serving Prometheus metrics      {"address": ":8888", "level": "Basic"}
2023-11-16T18:21:31.685Z        info    [email protected]/exporter.go:275        Deprecated component. Will be removed in future releases.       {"kind": "exporter", "data_type": "logs", "name": "logging"}
2023-11-16T18:21:31.685Z        warn    common/factory.go:68    'loglevel' option is deprecated in favor of 'verbosity'. Set 'verbosity' to equivalent value to preserve behavior.      {"kind": "exporter", "data_type": "logs", "name": "logging", "loglevel": "debug", "equivalent verbosity level": "Detailed"}
2023-11-16T18:21:31.685Z        info    [email protected]/service.go:143  Starting otelcol-contrib...     {"Version": "0.88.0", "NumCPU": 10}
2023-11-16T18:21:31.685Z        info    extensions/extensions.go:33     Starting extensions...
2023-11-16T18:21:31.685Z        info    extensions/extensions.go:36     Extension is starting...        {"kind": "extension", "name": "basicauth/logs"}
2023-11-16T18:21:31.685Z        info    extensions/extensions.go:43     Extension started.      {"kind": "extension", "name": "basicauth/logs"}
2023-11-16T18:21:31.685Z        info    extensions/extensions.go:36     Extension is starting...        {"kind": "extension", "name": "basicauth/server"}
2023-11-16T18:21:31.685Z        info    extensions/extensions.go:43     Extension started.      {"kind": "extension", "name": "basicauth/server"}
2023-11-16T18:21:31.685Z        info    extensions/extensions.go:36     Extension is starting...        {"kind": "extension", "name": "basicauth/traces"}
2023-11-16T18:21:31.685Z        info    extensions/extensions.go:43     Extension started.      {"kind": "extension", "name": "basicauth/traces"}
2023-11-16T18:21:31.685Z        info    extensions/extensions.go:36     Extension is starting...        {"kind": "extension", "name": "basicauth/metrics"}
2023-11-16T18:21:31.685Z        info    extensions/extensions.go:43     Extension started.      {"kind": "extension", "name": "basicauth/metrics"}
2023-11-16T18:21:31.686Z        warn    [email protected]/warning.go:40  Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks        {"kind": "receiver", "name": "otlp", "data_type": "metrics", "documentation": "https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"}
2023-11-16T18:21:31.686Z        info    [email protected]/otlp.go:101        Starting HTTP server    {"kind": "receiver", "name": "otlp", "data_type": "metrics", "endpoint": "0.0.0.0:4318"}
2023-11-16T18:21:31.686Z        info    [email protected]/service.go:169  Everything is ready. Begin running and processing data.
2023-11-16T18:21:37.557Z        info    LogsExporter    {"kind": "exporter", "data_type": "logs", "name": "logging", "resource logs": 1, "log records": 1}
2023-11-16T18:21:37.558Z        info    ResourceLog #0
Resource SchemaURL: 
Resource attributes:
     -> service.name: Str(liquidation-bot-rs)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope opentelemetry-appender-tracing 0.2.0
LogRecord #0
ObservedTimestamp: 2023-11-16 18:21:37.443094 +0000 UTC
Timestamp: 1970-01-01 00:00:00 +0000 UTC
SeverityText: DEBUG
SeverityNumber: Debug(5)
Body: Str(Initializing client with url: wss://testnet.tm.injective.network/websocket)
Trace ID: 
Span ID: 
Flags: 0
        {"kind": "exporter", "data_type": "logs", "name": "logging"}
2023-11-16T18:21:38.573Z        info    LogsExporter    {"kind": "exporter", "data_type": "logs", "name": "logging", "resource logs": 5, "log records": 5}
2023-11-16T18:21:38.574Z        info    ResourceLog #0
Resource SchemaURL: 
Resource attributes:
     -> service.name: Str(liquidation-bot-rs)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope opentelemetry-appender-tracing 0.2.0
LogRecord #0
ObservedTimestamp: 2023-11-16 18:21:38.154245 +0000 UTC
Timestamp: 1970-01-01 00:00:00 +0000 UTC
SeverityText: TRACE
SeverityNumber: Trace(1)
Body: Str(syncing data)
Trace ID: 
Span ID: 
Flags: 0
ResourceLog #1
Resource SchemaURL: 
Resource attributes:
     -> service.name: Str(liquidation-bot-rs)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope opentelemetry-appender-tracing 0.2.0
LogRecord #0
ObservedTimestamp: 2023-11-16 18:21:38.261633 +0000 UTC
Timestamp: 1970-01-01 00:00:00 +0000 UTC
SeverityText: TRACE
SeverityNumber: Trace(1)
Body: Str(syncing markets)
Attributes:
     -> height: Str(18494177)
Trace ID: 
Span ID: 
Flags: 0
ResourceLog #2
Resource SchemaURL: 
Resource attributes:
     -> service.name: Str(liquidation-bot-rs)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope opentelemetry-appender-tracing 0.2.0
LogRecord #0
ObservedTimestamp: 2023-11-16 18:21:38.261741 +0000 UTC
Timestamp: 1970-01-01 00:00:00 +0000 UTC
SeverityText: TRACE
SeverityNumber: Trace(1)
Body: Str(syncing collaterals)
Attributes:
     -> height: Str(18494177)
Trace ID: 
Span ID: 
Flags: 0
ResourceLog #3
Resource SchemaURL: 
Resource attributes:
     -> service.name: Str(liquidation-bot-rs)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope opentelemetry-appender-tracing 0.2.0
LogRecord #0
ObservedTimestamp: 2023-11-16 18:21:38.261778 +0000 UTC
Timestamp: 1970-01-01 00:00:00 +0000 UTC
SeverityText: TRACE
SeverityNumber: Trace(1)
Body: Str(syncing oracle asset details)
Attributes:
     -> height: Str(18494177)
Trace ID: 
Span ID: 
Flags: 0
ResourceLog #4
Resource SchemaURL: 
Resource attributes:
     -> service.name: Str(liquidation-bot-rs)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope opentelemetry-appender-tracing 0.2.0
LogRecord #0
ObservedTimestamp: 2023-11-16 18:21:38.370362 +0000 UTC
Timestamp: 1970-01-01 00:00:00 +0000 UTC
SeverityText: TRACE
SeverityNumber: Trace(1)
Body: Str(syncing markets)
Attributes:
     -> height: Str(18494178)
Trace ID: 
Span ID: 
Flags: 0
        {"kind": "exporter", "data_type": "logs", "name": "logging"}

0xForerunner avatar Nov 16 '23 18:11 0xForerunner

Looks like the trace ids are being omitted.

0xForerunner avatar Nov 16 '23 18:11 0xForerunner

Follow this example as-is https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-otlp/examples/basic-otlp/README.md#manual

You should see logs correctly correlated to traces https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-otlp/examples/basic-otlp/src/main.rs#L126

@ewoolsey Could you start with the above example (which we know is working as expected), and progressively make changes to it to match your setup? That'll help us identify the root cause quickly.

cijothomas avatar Nov 16 '23 18:11 cijothomas

Follow this example as-is https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-otlp/examples/basic-otlp/README.md#manual You should see logs correctly correlated to traces https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-otlp/examples/basic-otlp/src/main.rs#L126

@ewoolsey Could you start with the above example (which we know is working as expected), and progressively make changes to it to match your setup? That'll help us identify the root cause quickly.

I just did, and that example doesn't emit a trace_id either, even when inside a span.

code

use opentelemetry::KeyValue;
use opentelemetry_appender_tracing::layer;
use opentelemetry_sdk::{
    logs::{Config, LoggerProvider},
    Resource,
};
use tracing::{error, span};
use tracing_core::Level;
use tracing_subscriber::prelude::*;

fn main() {
    let exporter = opentelemetry_stdout::LogExporter::default();
    let provider: LoggerProvider = LoggerProvider::builder()
        .with_config(
            Config::default().with_resource(Resource::new(vec![KeyValue::new(
                "service.name",
                "log-appender-tracing-example",
            )])),
        )
        .with_simple_exporter(exporter)
        .build();
    let layer = layer::OpenTelemetryTracingBridge::new(&provider);
    tracing_subscriber::registry().with(layer).init();
    let my_span = span!(Level::TRACE, "my_span");

    // `my_span` exists but has not been entered.

    // Enter `my_span`...
    let _enter = my_span.enter();
    error!(name: "my-event-name", target: "my-system", event_id = 20, user_name = "otel", user_email = "[email protected]");
    drop(provider);
}

output

{"resourceLogs":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"log-appender-tracing-example"}}]},"scopeLogs":[{"scope":{"name":"opentelemetry-appender-tracing","version":"0.2.0"},"logR
ecords":[{"timeUnixNano":null,"time":null,"observedTimeUnixNano":1700160228053182000,"observedTime":"2023-11-16 18:43:48.053","severityNumber":17,"severityText":"ERROR","attributes":[{"key":"name","value":{"stri
ngValue":"my-event-name"}},{"key":"event_id","value":{"intValue":20}},{"key":"user_name","value":{"stringValue":"otel"}},{"key":"user_email","value":{"stringValue":"[email protected]"}}],"droppedAttributesCo
unt":0}]}]}]}

0xForerunner avatar Nov 16 '23 18:11 0xForerunner

I don't see the tracing-opentelemetry subscriber configured in the above example before emitting spans.

lalitb avatar Nov 16 '23 19:11 lalitb

I don't see the tracing-opentelemetry subscriber configured in the above example before emitting spans.

Ah right, I just blindly ran that example since you linked it.

Anyhow, I found the root cause.

This does not emit a trace id

use tracing::{instrument, trace};

#[instrument]
pub async fn my_func() {
    trace!("fun log");
}

But this does

use opentelemetry::trace::Tracer;
use tracing::{instrument, trace};

#[instrument]
pub async fn my_func() {
    let tracer = opentelemetry::global::tracer("ex.com/basic");
    tracer.in_span("cool span", |_| {
        trace!("fun log");
    });
}

Should #[instrument] not be taking care of this for me?

0xForerunner avatar Nov 16 '23 19:11 0xForerunner

Should #[instrument] not be taking care of this for me?

That is a question for the tracing-opentelemetry repo! It should have worked, but it might need additional setup/configuration, which is better asked about over there.

cijothomas avatar Nov 16 '23 19:11 cijothomas

Should #[instrument] not be taking care of this for me?

That is a question for the tracing-opentelemetry repo! It should have worked, but it might need additional setup/configuration, which is better asked about over there.

I can't have been the only person to run into this; it seems like a pretty common use case. I suppose I can open up an issue over there.

Thanks so much for your help talking through this! Appreciate your time. I'll mark this issue as closed here.

0xForerunner avatar Nov 16 '23 19:11 0xForerunner

Sorry I think I need to reopen this. I'm pretty sure the issue lies with the opentelemetry-appender-tracing crate. Looking at its code, nowhere does it use the LogRecord trace_context field, which I believe is required to make this feature work correctly, no?

0xForerunner avatar Nov 18 '23 23:11 0xForerunner

opentelemetry-appender-tracing expects the parent trace context to be stored in the OTel Context, while tracing-opentelemetry manages the parent-child relationship of spans primarily within the tracing context, and that context is not translated into the OpenTelemetry Context for logs. Will have a look into this further, though I'm not sure where the change lies.

lalitb avatar Nov 19 '23 05:11 lalitb

opentelemetry-appender-tracing expects the parent trace context to be stored in the OTel Context, while tracing-opentelemetry manages the parent-child relationship of spans primarily within the tracing context, and that context is not translated into the OpenTelemetry Context for logs. Will have a look into this further, though I'm not sure where the change lies.

Yeah I think that's an accurate description. I would think that opentelemetry-appender-tracing should be doing the work here but this ecosystem is complicated enough that I won't claim to know how things should work haha.

0xForerunner avatar Nov 19 '23 17:11 0xForerunner

opentelemetry-appender-tracing expects the parent trace context to be stored in the OTel Context, while tracing-opentelemetry manages the parent-child relationship of spans primarily within the tracing context, and that context is not translated into the OpenTelemetry Context for logs. Will have a look into this further, though I'm not sure where the change lies.

I changed the name of this issue to reflect what I think should be the expected behaviour.

0xForerunner avatar Nov 21 '23 01:11 0xForerunner

@lalitb I need to try and get this going for my work; would you be able to help me out with a quick workaround? Within the on_event trait method inside opentelemetry-appender-tracing, how exactly should I be extracting the trace_id from the tracing ctx? Here's what I have so far, but I can't quite figure out the very last step.

impl<'a, S, P, L> Layer<S> for OpenTelemetryTracingBridge<P, L>
where
    for<'lookup> S: tracing_subscriber::registry::LookupSpan<'lookup>,
    S: tracing::Subscriber,
    P: LoggerProvider<Logger = L> + Send + Sync + 'static,
    L: Logger + Send + Sync + 'static,
{
    fn on_event(&self, event: &tracing::Event<'_>, ctx: tracing_subscriber::layer::Context<'_, S>) {
        let meta = event.metadata();
        let mut log_record: LogRecord = LogRecord::default();
        log_record.severity_number = Some(severity_of_level(meta.level()));
        log_record.severity_text = Some(meta.level().to_string().into());

        if let Some(current_span) = ctx.lookup_current() {
            if let Some(mut otel_data) = current_span.extensions_mut().remove::<OtelData>() {
                log_record.with_context(otel_data.parent_cx);
            }
        }


    ...

0xForerunner avatar Nov 22 '23 20:11 0xForerunner

@lalitb I need to try and get this going for my work; would you be able to help me out with a quick workaround? Within the on_event trait method inside opentelemetry-appender-tracing, how exactly should I be extracting the trace_id from the tracing ctx? Here's what I have so far, but I can't quite figure out the very last step.

impl<'a, S, P, L> Layer<S> for OpenTelemetryTracingBridge<P, L>
where
    for<'lookup> S: tracing_subscriber::registry::LookupSpan<'lookup>,
    S: tracing::Subscriber,
    P: LoggerProvider<Logger = L> + Send + Sync + 'static,
    L: Logger + Send + Sync + 'static,
{
    fn on_event(&self, event: &tracing::Event<'_>, ctx: tracing_subscriber::layer::Context<'_, S>) {
        let meta = event.metadata();
        let mut log_record: LogRecord = LogRecord::default();
        log_record.severity_number = Some(severity_of_level(meta.level()));
        log_record.severity_text = Some(meta.level().to_string().into());

        if let Some(current_span) = ctx.lookup_current() {
            if let Some(mut otel_data) = current_span.extensions_mut().remove::<OtelData>() {
                log_record.with_context(otel_data.parent_cx);
            }
        }


    ...

@ewoolsey with_context is defined for LogRecordBuilder, so it won't work here. Did you try directly assigning to log_record.trace_context as done in #1394? I believe it should work.

Also, please remember this will call both OpenTelemetryTracingBridge::on_event() and OpenTelemetryLayer::on_event(), so the logs will be exported both as a SpanEvent and a LogRecord.
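
For anyone needing an interim workaround, here is a rough sketch of that direct assignment, continuing the on_event snippet above (it assumes the tracing-opentelemetry OtelData extension and the public trace_context field on the SDK LogRecord of that era; field and type names may differ across versions):

use opentelemetry::{logs::TraceContext, trace::TraceContextExt};
use tracing_opentelemetry::OtelData;

// ...inside OpenTelemetryTracingBridge::on_event, after building `log_record`:
if let Some(span) = ctx.lookup_current() {
    let extensions = span.extensions();
    if let Some(otel_data) = extensions.get::<OtelData>() {
        // The current span's ids live on the (not yet ended) SpanBuilder;
        // fall back to the parent Context's active span if they are unset.
        let parent_span = otel_data.parent_cx.span();
        let parent_ctx = parent_span.span_context();
        let trace_id = otel_data
            .builder
            .trace_id
            .unwrap_or_else(|| parent_ctx.trace_id());
        let span_id = otel_data
            .builder
            .span_id
            .unwrap_or_else(|| parent_ctx.span_id());
        log_record.trace_context = Some(TraceContext {
            trace_id,
            span_id,
            trace_flags: None,
        });
    }
}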

lalitb avatar Nov 22 '23 23:11 lalitb

Oh that's amazing I didn't realize a PR was already being worked on! I'm sure that'll be exactly what I need. Thank you!

0xForerunner avatar Nov 22 '23 23:11 0xForerunner

Has there been any progress on this? Even just a workaround? I haven't been able to find any good examples to implement, and I have tried five or so possible solutions to no avail.

ShovelJockey avatar Mar 25 '24 18:03 ShovelJockey

Is there an accepted way to correlate logs & traces when using tracing_subscriber? As a hack I'm using opentelemetry-appender-tracing along with tracing-opentelemetry to capture both traces and logs, which is inefficient. Is there a better way?

If I only use tracing-opentelemetry I get the traces ingested, but log events can't set the proper level because they're technically ingested as traces.

If anyone has some working code they can share that correlates log events with traces, I'd very much appreciate it.

nastynaz avatar Apr 19 '24 08:04 nastynaz

Ping to https://github.com/open-telemetry/opentelemetry-rust/issues/1689

cijothomas avatar Apr 30 '24 02:04 cijothomas