opentelemetry-helm-charts
Disabled ports are still enabled in the generated service
I have the following configuration:
mode: daemonset

ports:
  # Disable default ports.
  jaeger-compact:
    enabled: false
  jaeger-grpc:
    enabled: false
  jaeger-thrift:
    enabled: false
  otlp:
    enabled: false
  otlp-http:
    enabled: false
  zipkin:
    enabled: false

service:
  enabled: true
From this, I'd expect these ports to be completely disabled and left out of the Service. That expectation is backed up by the chart's own templates:
- In service.yaml:
{{- $ports := include "opentelemetry-collector.servicePortsConfig" . }}
{{- if $ports }}
ports:
{{- $ports | nindent 4}}
{{- end }}
- In _config.tpl:
{{- define "opentelemetry-collector.servicePortsConfig" -}}
{{- $ports := deepCopy .Values.ports }}
{{- range $key, $port := $ports }}
{{- if $port.enabled }}
- name: {{ $key }}
  port: {{ $port.servicePort }}
  targetPort: {{ $port.containerPort }}
  protocol: {{ $port.protocol }}
  {{- if $port.appProtocol }}
  appProtocol: {{ $port.appProtocol }}
  {{- end }}
  {{- if $port.nodePort }}
  nodePort: {{ $port.nodePort }}
  {{- end }}
{{- end }}
{{- end }}
{{- end }}
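For context, when enabled is true this helper emits one list entry per port, and when enabled is false it emits nothing for that port at all, so a disabled port should never reach the Service. As a sketch, the fragment rendered for a single enabled port, assuming the chart's default otlp definition (servicePort 4317, TCP, appProtocol grpc), would look like:

# Hypothetical output of servicePortsConfig for an enabled otlp port.
# A port with enabled: false contributes no lines here at all.
- name: otlp
  port: 4317
  targetPort: 4317
  protocol: TCP
  appProtocol: grpc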
However, when running with service.enabled: true, I can see that these disabled ports are still exposed on the generated Service resource:
➜ ~ kubectl describe service -n test otel
...
Port: jaeger-compact 6831/UDP
TargetPort: 6831/UDP
Endpoints: xxx.xxx.xxx.xxx:6831,xxx.xxx.xxx.xxx:6831,xxx.xxx.xxx.xxx:6831
Port: jaeger-grpc 14250/TCP
TargetPort: 14250/TCP
Endpoints: xxx.xxx.xxx.xxx:14250,xxx.xxx.xxx.xxx:14250,xxx.xxx.xxx.xxx:14250
Port: jaeger-thrift 14268/TCP
TargetPort: 14268/TCP
Endpoints: xxx.xxx.xxx.xxx:14268,xxx.xxx.xxx.xxx:14268,xxx.xxx.xxx.xxx:14268
Port: otlp 4317/TCP
TargetPort: 4317/TCP
Endpoints: xxx.xxx.xxx.xxx:4317,xxx.xxx.xxx.xxx:4317,xxx.xxx.xxx.xxx:4317
Port: otlp-http 4318/TCP
TargetPort: 4318/TCP
Endpoints: xxx.xxx.xxx.xxx:4318,xxx.xxx.xxx.xxx:4318,xxx.xxx.xxx.xxx:4318
Port: zipkin 9411/TCP
TargetPort: 9411/TCP
Endpoints: xxx.xxx.xxx.xxx:9411,xxx.xxx.xxx.xxx:9411,xxx.xxx.xxx.xxx:9411
Session Affinity: None
Events: <none>
The chart is definitely reading my values.yaml, since any new ports I add do show up in this list as expected.
The redacted IP addresses are allocated by my network CNI plugin, so these ports end up on externally facing addresses on the network running my cluster, which is not desired behaviour.
- Image: otel/opentelemetry-collector-contrib:0.109.0
- Chart: 0.105.1
Hmm, we do have a check for whether each port is enabled: https://github.com/open-telemetry/opentelemetry-helm-charts/blob/main/charts/opentelemetry-collector/templates/_config.tpl#L251-L267
Maybe there is some issue with your values.yaml? Are you using the chart as a subchart? Could you share it?
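For reference, if the chart were being used as a subchart, the same values would need to be nested under the dependency's name in the parent chart's values.yaml; left at the top level, they would be silently ignored and the default ports would stay enabled. A hypothetical parent values.yaml, assuming the dependency is named opentelemetry-collector:

# Parent chart values.yaml (hypothetical). Values placed at the top
# level instead of under this key never reach the subchart.
opentelemetry-collector:
  mode: daemonset
  ports:
    jaeger-compact:
      enabled: false
  service:
    enabled: true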
I'm just using the latest chart from the releases with the settings above (there are other settings I'd need to redact next time I'm at my work machine, but looking at the logic, they shouldn't affect this). No other modifications are present.
I tried to reproduce this but wasn't able to with your provided values.
Values
mode: daemonset

image:
  repository: "otel/opentelemetry-collector-k8s"

command:
  name: "otelcol-k8s"

ports:
  # Disable default ports.
  jaeger-compact:
    enabled: false
  jaeger-grpc:
    enabled: false
  jaeger-thrift:
    enabled: false
  otlp:
    enabled: false
  otlp-http:
    enabled: false
  zipkin:
    enabled: false

service:
  enabled: true
Command:
helm template otelcol open-telemetry/opentelemetry-collector --version v0.105.1 --values ./values.yaml > test.yml
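To narrow the render to just the Service, the same command can be filtered with --show-only (a small variation on the above; the path templates/service.yaml is assumed from the chart layout):

helm template otelcol open-telemetry/opentelemetry-collector --version v0.105.1 --values ./values.yaml --show-only templates/service.yaml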
That gave me the following k8s service:
apiVersion: v1
kind: Service
metadata:
  name: otelcol-opentelemetry-collector
  namespace: default
  labels:
    helm.sh/chart: opentelemetry-collector-0.105.1
    app.kubernetes.io/name: opentelemetry-collector
    app.kubernetes.io/instance: otelcol
    app.kubernetes.io/version: "0.109.0"
    app.kubernetes.io/managed-by: Helm
    component: agent-collector
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: opentelemetry-collector
    app.kubernetes.io/instance: otelcol
    component: agent-collector
  internalTrafficPolicy: Local
As you can see, no ports were generated.
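As a quick sanity check, one could also confirm that the rendered spec has no ports key at all (a sketch assuming yq v4 is available and test.yml holds the full render from the command above):

yq 'select(.kind == "Service") | .spec.ports' test.yml
# prints: null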
I am unable to reproduce this issue locally either.
Are you still running into this, @ascopes?
I will have to check next week when I am at my work machine! I will let you know.
Looks like it is working now... that is really bizarre! Thanks for the support.