
Consider using k8s workload name as the service.name

dashpole opened this issue 4 months ago

Feature Request

This may be a question for semantic conventions, but I figured I would start here.

The k8s demo currently uses the app.kubernetes.io/component label as the service.name resource attribute: https://github.com/open-telemetry/opentelemetry-demo/blob/14bc74abf0bd7b192401af91abc60d79f4c60cfa/kubernetes/opentelemetry-demo.yaml#L9727-L9731.
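For reference, the linked lines set OTEL_SERVICE_NAME from the pod label via the downward API, roughly like this (paraphrased from the linked manifest):

```yaml
# Env var on each demo container, populated from the pod's component label
- name: OTEL_SERVICE_NAME
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.labels['app.kubernetes.io/component']
```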

This differs from the behavior of the OpenTelemetry Operator, which uses the workload (Deployment, StatefulSet, etc.) name as the service.name resource attribute.
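By contrast, a container injected by the operator's auto-instrumentation ends up with something like the following (a sketch with hypothetical values; the operator derives the value from the pod's owner workload):

```yaml
# Injected by the operator webhook (illustrative, not the operator's exact output)
- name: OTEL_SERVICE_NAME
  value: frontend   # taken from the owning Deployment/StatefulSet name
```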

It would be nice if there were a standard way to default the service name in k8s environments, used by both the operator and the k8s demo.

@open-telemetry/semconv-k8s-approvers

dashpole avatar Feb 28 '24 15:02 dashpole

To confirm I'm reading that code correctly: it uses the K8s Deployment name for the service.name attribute?

puckpuck avatar Mar 01 '24 02:03 puckpuck

There was a similar discussion in this now-closed PR on semantic-conventions, though we were looking at a different set of standard labels there than the ones being discussed here. We should open a new issue to discuss this in semantic-conventions, but it might be difficult to standardize given the number of options.

Re: what should be our recommended way of getting the service.name? On the one hand, the name of the controller (Deployment/StatefulSet) might not reflect the logical service name. For example, we split a single installation of a database cluster into a StatefulSet per zone for better operational control in case a zone goes down. In that case, the three StatefulSets belong to the same logical service, and our service.name-equivalent attribute is set to the same value (which we get from a label) for pods from all three. Another, more common use case where a logical service runs as part of multiple deployments is the canary/stable deployment pattern.

On the other hand, the standard k8s labels are just recommendations, so they might not even be present on workloads, whereas the controller name always is. I think the recommendation in sem-conv should be the k8s recommended label (we seem to have 3 options here, but IMO app.kubernetes.io/instance is closer to a logical service than the others), falling back to the controller name when the label is not available.
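For what it's worth, that label-with-fallback behavior could be approximated at the collector with the k8sattributes and transform processors. A minimal sketch, assuming pods carry the app.kubernetes.io/instance label; app.instance is an intermediate attribute name chosen purely for illustration:

```yaml
processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.deployment.name
      labels:
        # Copy the recommended label onto telemetry as "app.instance"
        - tag_name: app.instance
          key: app.kubernetes.io/instance
          from: pod
  transform:
    error_mode: ignore
    trace_statements:
      - context: resource
        statements:
          # Prefer the recommended label...
          - set(attributes["service.name"], attributes["app.instance"]) where attributes["app.instance"] != nil
          # ...and fall back to the controller (Deployment) name
          - set(attributes["service.name"], attributes["k8s.deployment.name"]) where attributes["app.instance"] == nil and attributes["k8s.deployment.name"] != nil
```

Note this sketch overwrites any service.name the SDK already set, which may or may not be desirable.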

jinja2 avatar Mar 01 '24 19:03 jinja2

I wonder if part of the problem here is that we have the chart's release name as a prefix on every service name. If we removed that, each component's service name would align more closely with its Deployment name.
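For context, a common Helm naming helper prefixes the release name, roughly like this (a sketch of the usual convention, not the demo chart's exact template):

```yaml
{{/* Typical _helpers.tpl fullname definition (illustrative) */}}
{{- define "demo.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```

With a release named my-otel-demo this yields Deployments like my-otel-demo-frontend, while the component label (and thus service.name) is just frontend.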

puckpuck avatar Mar 01 '24 20:03 puckpuck