Profiles rejected because of invalid Timestamp
Hello, I'm encountering an issue when using the eBPF profiler (eb8909ec) with the collector and exporting profiling data to Pyroscope. The collector logs show that profile data is being dropped due to a timestamp that defaults to 1970-01-01, which is outside of Pyroscope's ingestion window. Here's a snippet from the logs:
otel-collector | 2025-10-09T13:09:29.684Z error internal/queue_sender.go:42 Exporting failed. Dropping data.
{
  "resource": {
    "service.instance.id": "959b3d27-4396-4ebb-9873-1802059cc236",
    "service.name": "otelcol-contrib",
    "service.version": "0.136.0"
  },
  "otelcol.component.id": "otlp/pyroscope",
  "otelcol.component.kind": "exporter",
  "otelcol.signal": "profiles",
  "error": "not retryable error: Permanent error: rpc error: code = Unknown desc = failed to make a GRPC request: invalid_argument: profile with labels '{__delta__=\"false\", __name__=\"process_cpu\", __otel__=\"true\", container.id=\"fb399d21161bf5fa41e0b2c7496c799704bd1cadb5424f3711d766032c2604ce\", service_name=\"unknown_service\"}' is outside of ingestion window (profile timestamp: 1970-01-01 00:00:00.397 +0000 UTC, the ingestion window starts at 2025-10-09 12:09:29.683 +0000 UTC and ends at 2025-10-09 13:19:29.683 +0000 UTC)",
  "dropped_items": 8
}
Hi @EmilRte, can you please check with Pyroscope which field/timestamp they are using, and in which format (nanoseconds or something else), that results in this error?
I ran into the exact same issue after upgrading my test OTel collector distribution to the latest ebpf-profiler, which bumped the OTel dependencies to v0.137.0. It should be fairly easy to figure out which exact field Pyroscope is complaining about.
Ran into this too. I'm speculating that https://github.com/open-telemetry/opentelemetry-collector/pull/13758 broke wire compatibility with pprof, which is expected by Pyroscope during ingest.
We (the Profiling SIG) decided a long time ago not to pursue wire compatibility with pprof, so if that's an assumption a downstream makes, expect further breakage. Instead, we see OTel profiling as a pprof superset, meaning that pprof data should be convertible to the OTel profiling format (and back) without loss of information.
@EmilRte In this case, it seems that the Sample timestamp is set to zero (based on the error message). Can you double check the exporter logic and verify that it's processing the right field?
Makes perfect sense: the sources call out that any wire compatibility with pprof is merely coincidental, and the move has obvious technical merit. I was glad to see this change.
That said, pprof is the dominant transport format for profiling data in the ecosystem today (despite its inferiority), so this breakage forces all downstream services to support both pprof and pprofile ingest. Would it make sense for the OTel collector to serve as an adapter, down-converting pprofile to pprof on export when configured to do so? (Apologies if this is already planned; I may be out of the loop.)
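One concrete wrinkle in such a down-conversion is time: pprof carries a single profile-level start time (`time_nanos`), while OTel profiles can attach timestamps to individual samples, so a converter has to collapse per-sample times to one value. A minimal sketch of that step, using an illustrative stand-in type rather than the real pprof or pdata structures:

```go
package main

import (
	"fmt"
	"time"
)

// otelSample is a hypothetical stand-in for an OTel profiling sample that
// carries its own wall-clock timestamp in nanoseconds since the Unix epoch.
type otelSample struct {
	timeUnixNano uint64
}

// profileTimeNanos derives a pprof-style profile-level time_nanos value by
// taking the earliest sample timestamp. Other policies (e.g. the profile's
// own start time, if set) would also be reasonable.
func profileTimeNanos(samples []otelSample) int64 {
	min := samples[0].timeUnixNano
	for _, s := range samples[1:] {
		if s.timeUnixNano < min {
			min = s.timeUnixNano
		}
	}
	return int64(min)
}

func main() {
	samples := []otelSample{
		{uint64(time.Date(2025, 10, 9, 13, 9, 20, 0, time.UTC).UnixNano())},
		{uint64(time.Date(2025, 10, 9, 13, 9, 10, 0, time.UTC).UnixNano())},
	}
	fmt.Println(time.Unix(0, profileTimeNanos(samples)).UTC())
}
```

This is only a sketch of one design choice; the actual converter work is tracked in the contrib repository.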
Does it make sense for the OTel collector to serve as an adapter [..]
https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/42843 is the first step in that direction, converting pprof to OTel profiles. Further steps will follow later on. Contributions to speed up this process are always welcome.
Closing this issue: in https://github.com/open-telemetry/opentelemetry-ebpf-profiler/issues/997 a user reported the same problem with timestamps and confirmed that updating the backend resolved the issue.