opentelemetry-go-instrumentation
Can't get container.id from auto instrumentation
Describe the bug
I'm using auto instrumentation in a minikube cluster running one deployment (3 replicas); each pod has 2 containers:
- a simple Go HTTP server
- the auto-instrumentation container
Traces are working fine, except that container.id is missing (I need it for infra correlation in the platform I'm using, Cisco Cloud Observability).
I'm trying to set container.id through OTEL_RESOURCE_ATTRIBUTES with a shell pipeline that reads /proc/self/mountinfo, but I can't get the ID that way. Running the same command inside the container's shell works fine.
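For reference, the shell pipeline is roughly equivalent to this Go sketch (a minimal illustration; containerIDFromMountinfo is a hypothetical helper, not part of the instrumentation):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// containerIDFromMountinfo mirrors the shell pipeline: it scans
// /proc/self/mountinfo for the first "docker/containers/<id>/" path
// segment and returns the hex ID.
func containerIDFromMountinfo() (string, error) {
	data, err := os.ReadFile("/proc/self/mountinfo")
	if err != nil {
		return "", err
	}
	m := regexp.MustCompile(`docker/containers/([a-f0-9]+)/`).FindSubmatch(data)
	if m == nil {
		return "", fmt.Errorf("no docker container ID found in mountinfo")
	}
	return string(m[1]), nil
}

func main() {
	id, err := containerIDFromMountinfo()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(id)
}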
This is my deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: ivango
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      shareProcessNamespace: true
      containers:
        - name: myapp
          image: myapp:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
        - name: autoinstrumentation-go
          image: otel/autoinstrumentation-go
          imagePullPolicy: IfNotPresent
          env:
            - name: OTEL_GO_AUTO_TARGET_EXE
              value: /app/myapp
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://10.108.129.85:4318"
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: service.name=GO-Service-Ivan,service.namespace=ivango,k8s.namespace.name=ivango,container.id=$(cat /proc/self/mountinfo | grep -m1 -oE 'docker/containers/([a-f0-9]+)/' | xargs basename)
          securityContext:
            runAsUser: 0
            privileged: true
The result is that I can see all the traces correctly, but instead of the container.id I get the script text itself.
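Presumably this is because Kubernetes only expands $(VAR) references to other environment variables inside env values and never runs shell commands, so the pipeline text is passed through verbatim. A minimal Go sketch to confirm what the process actually receives:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Print the raw value injected by Kubernetes; the $(...) pipeline
	// shows up literally because no shell ever evaluates it.
	fmt.Println(os.Getenv("OTEL_RESOURCE_ATTRIBUTES"))
}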
This is my Dockerfile:
FROM golang:1.16 as builder
WORKDIR /app
COPY . .
RUN go mod init example.com/myapp
RUN CGO_ENABLED=0 GOOS=linux go build -o myapp
FROM alpine:3.14
WORKDIR /app
COPY --from=builder /app/myapp /app/
EXPOSE 8080
CMD ["/app/myapp"]
And this is my main.go:
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "Hello, Kubernetes!")
	})
	http.ListenAndServe(":8080", nil)
}
Environment
- OS: Amazon Linux
- Go Version: 1.16
- Version: v0.12.0-alpha
To Reproduce
Steps to reproduce the behavior:
- Build a Docker image of the app using the Dockerfile above
- Create a minikube cluster
- Apply the deployment.yaml
- See the script text instead of the container.id on the OTel platform
Expected behavior
Seeing the container.id reported by the auto-instrumented Go application container.
Is using an OTel Collector on the same pod an acceptable solution? You may also be interested in https://github.com/open-telemetry/opentelemetry-operator.
@pellared in my scenario that is not a suitable solution. I'm leveraging an automated mechanism in my platform where the collector runs in a separate pod, and we would like to keep it as automated as possible.
I was wondering if there is some small mistake in the script definition in OTEL_RESOURCE_ATTRIBUTES that prevents the container.id from being resolved and yields the script itself instead.
we would like to keep it as automated as possible
Why would using https://github.com/open-telemetry/opentelemetry-operator be less automated (or not automated at all)?
Would it be possible to add some of the existing resource detectors to the auto instrumentation? E.g. some of this code:
https://github.com/open-telemetry/opentelemetry.io/blob/05a7dfa420911704c8a234a1783555e431b5ef50/content/en/docs/languages/go/resources.md?plain=1#L39
could be added here:
https://github.com/open-telemetry/opentelemetry-go-instrumentation/blob/5acdf20923df5f38e6efd745e4329e0a2db48250/instrumentation.go#L232
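Something along these lines, as a rough sketch (assuming the instrumentation could merge the SDK's built-in detectors from go.opentelemetry.io/otel/sdk/resource into the resource it builds; the wiring below is illustrative, not the repo's actual code):

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/sdk/resource"
)

func main() {
	// Build a resource with the SDK's built-in detectors so container.id
	// (and host/process attributes) are discovered automatically, in
	// addition to whatever OTEL_RESOURCE_ATTRIBUTES provides.
	res, err := resource.New(context.Background(),
		resource.WithFromEnv(),   // honors OTEL_RESOURCE_ATTRIBUTES
		resource.WithContainer(), // sets container.id when detectable
		resource.WithHost(),
		resource.WithTelemetrySDK(),
	)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(res.Attributes())
}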