opentelemetry-collector
[telemetry] Add OTEL_RESOURCE_ATTRIBUTES to otlp-emitted metrics
Describe the bug
When enabling the telemetry.useOtelWithSDKConfigurationForInternalTelemetry feature gate and configuring an exporter to send internal telemetry over OTLP rather than Prometheus, the resource attributes set by the OTEL_RESOURCE_ATTRIBUTES environment variable do not appear to be included.
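For reference, the OTel SDK spec defines OTEL_RESOURCE_ATTRIBUTES as a comma-separated list of key=value pairs. A minimal, stdlib-only Go sketch of that parsing (an illustration of the expected format, not the collector's actual implementation) looks like:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// parseResourceAttributes splits the OTEL_RESOURCE_ATTRIBUTES format
// (comma-separated key=value pairs) into a map, skipping malformed entries.
func parseResourceAttributes(raw string) map[string]string {
	attrs := make(map[string]string)
	for _, pair := range strings.Split(raw, ",") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 {
			continue // ignore entries without a '='
		}
		key := strings.TrimSpace(kv[0])
		val := strings.TrimSpace(kv[1])
		if key != "" {
			attrs[key] = val
		}
	}
	return attrs
}

func main() {
	os.Setenv("OTEL_RESOURCE_ATTRIBUTES", "pod_ip=10.0.0.1,service.namespace=demo")
	attrs := parseResourceAttributes(os.Getenv("OTEL_RESOURCE_ATTRIBUTES"))
	fmt.Println(attrs["pod_ip"])            // 10.0.0.1
	fmt.Println(attrs["service.namespace"]) // demo
}
```

The expectation in this issue is that attributes parsed this way end up on the Resource of the collector's own OTLP-exported metrics.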
Steps to reproduce
Using these values in the Helm chart:

```yaml
command:
  extraArgs:
    - '--feature-gates=telemetry.useOtelWithSDKConfigurationForInternalTelemetry'
extraEnvs:
  - name: 'OTEL_RESOURCE_ATTRIBUTES'
    value: 'pod_ip=$(MY_POD_IP)'
```
It renders to a container spec that looks correct:

```yaml
...
containers:
  - args:
      - --config=/conf/relay.yaml
      - --feature-gates=telemetry.useOtelWithSDKConfigurationForInternalTelemetry
    env:
      - name: MY_POD_IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.podIP
      - name: OTEL_RESOURCE_ATTRIBUTES
        value: pod_ip=$(MY_POD_IP)
    image: otel/opentelemetry-collector-contrib:0.107.0
...
```
Emitted metrics do not seem to have the resource attribute added.
What did you expect to see?
The metrics that show up at the OTLP target have a resource attribute called pod_ip.
What did you see instead?
The metrics that show up at the OTLP target have no such resource attribute.
What version did you use?
contrib 0.107.0
What config did you use?
```yaml
mode: deployment
replicaCount: 1
presets:
  clusterMetrics:
    enabled: true
  kubernetesEvents:
    enabled: true
image:
  repository: "otel/opentelemetry-collector-contrib"
command:
  extraArgs:
    - '--feature-gates=telemetry.useOtelWithSDKConfigurationForInternalTelemetry'
extraEnvs:
  - name: 'OTEL_RESOURCE_ATTRIBUTES'
    value: 'pod_ip=$(MY_POD_IP)'
config:
  exporters:
    ... redacted
  service:
    pipelines:
      metrics:
        receivers: [ k8s_cluster ]
        exporters: [ otlp/k8s-clusterMetrics ]
      metrics/otelcol:
        receivers: [ prometheus ]
        exporters: [ otlp/otelcolMetrics ]
      logs:
        processors: [ memory_limiter, transform/events, filter/k8s-events, batch ]
        exporters: [ otlp/kubernetesEvents ]
    telemetry:
      metrics:
        readers:
          - periodic:
              interval: 60000
              exporter:
                otlp:
                  protocol: http/protobuf
                  endpoint: "https://api.honeycomb.io:443/v1/metrics"
                  headers:
                    "X-Honeycomb-Team": "REDACTED"
                    "X-Honeycomb-Dataset": "otel-collector-otlp-metrics"
```
Environment
Kubernetes is Amazon EKS: v1.30.2-eks-1552ad0
Additional context
I also rigged up Docker Compose to make the output clearer. This is the resource section in the console output:
```
otel-col | "Resource": [
otel-col |   {
otel-col |     "Key": "service.instance.id",
otel-col |     "Value": {
otel-col |       "Type": "STRING",
otel-col |       "Value": "c0679388-2ab5-4339-a07d-47e91fd33e36"
otel-col |     }
otel-col |   },
otel-col |   {
otel-col |     "Key": "service.name",
otel-col |     "Value": {
otel-col |       "Type": "STRING",
otel-col |       "Value": "otelcol-k8s"
otel-col |     }
otel-col |   },
otel-col |   {
otel-col |     "Key": "service.version",
otel-col |     "Value": {
otel-col |       "Type": "STRING",
otel-col |       "Value": "0.107.0"
otel-col |     }
otel-col |   }
otel-col | ],
otel-col | "ScopeMetrics": [ ... ]
```
I also tried overriding the service name and it didn't have any impact either.
```yaml
services:
  otelcol:
    image: otel/opentelemetry-collector-k8s:0.107.0
    container_name: otel-col
    environment:
      - OTEL_RESOURCE_ATTRIBUTES=pod_ip=docker-compose
      - OTEL_SERVICE_NAME=override_env
    command:
      [
        "--config=/etc/otelcol-config.yml",
        "--feature-gates=telemetry.useOtelWithSDKConfigurationForInternalTelemetry",
      ]
    volumes:
      - ./otlptelemetry-collector-config.yaml:/etc/otelcol-config.yml
    ports:
      - "4317" # OTLP over gRPC receiver
```
I agree we should support this.
@mterhar as a workaround, what happens if you do:

```yaml
service:
  telemetry:
    resource:
      pod_ip: "my favorite ip"
```
That does work as a workaround, but the Helm chart doesn't expose a pod name within the container, which makes alerts a bit tough to map back to reality.
For now you can use env vars and the downward API. I believe something like this will work:

```yaml
mode: deployment
image:
  repository: otel/opentelemetry-collector-k8s
extraEnvs:
  - name: K8s_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
config:
  service:
    telemetry:
      resource:
        k8s.pod.name: ${env:K8s_POD_NAME}
```
I can give it a try if @codeboten doesn't mind :)
@iblancasa sure go for it!
@iblancasa Did you get around to working on this?
Removing this from the 1.0 project since it has a workaround and we consider it an enhancement.