IPv6-only monitoring error
I am using Helm to install tigera-operator. Because I need monitoring, I set nodeMetricsPort: 9091, but in an IPv6-only environment the discovered labels captured by Prometheus show address="fddd:3bcc:a689::66", with no port, so the monitoring rule fails. Normally it should be address="[fddd:3bcc:a689::66]:$port", so I hoped to inject a port configuration into the container spec:
ports:
  - containerPort: 9091
    hostPort: 9091
    protocol: TCP
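For context: with the pod role of Prometheus's Kubernetes service discovery, __address__ defaults to <podIP>:<containerPort> for every port a container declares, but when a container declares no ports at all, the target address is just the bare pod IP, which is exactly the symptom above. A minimal sketch of such a scrape job (the job name and the keep rule are assumptions, not copied from my actual config):

- job_name: calico-node
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Only scrape pods that opt in via the prometheus.io/scrape annotation.
    - action: keep
      regex: "true"
      source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_scrape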
But the injection failed. I know it is probably a problem with my configuration, but I can't find an example. My incorrect configuration is:
installation:
  registry: aetest.com:5000
  calicoNetwork:
    ipPools:
      - blockSize: 122
        cidr: fcaa:8888::/64
        encapsulation: VXLAN
        natOutgoing: Enabled
        nodeSelector: all()
    mtu: 0
    nodeAddressAutodetectionV6:
      interface: eth.*|en.*
  enabled: true
  kubeletVolumePluginPath: /data/k8sbasedata/kubelet
  kubernetesProvider: ""
  nodeMetricsPort: 9091
  typhaMetricsPort: 9093
  CalicoNodeDaemonSet:
    template:
      containers:
        ports:
          - containerPort: 9091
            hostPort: 9091
            protocol: TCP
Hm, setting that field should enable the following annotations on the calico/node DaemonSet:
https://github.com/tigera/operator/blob/e5880ff2edf627c98cf12ad29f24bcab011be78d/pkg/render/node.go#L906-L909
I would expect Prometheus to pick up on those, but perhaps we also need to add the containerPort to the spec when that happens?
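(For reference, those annotations are presumably the standard Prometheus scrape hints, along these lines; exact keys and values are an assumption, see the linked source:)

metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9091"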
Right now, the only options available for CalicoNodeDaemonSet are the container resource requests and limits, per the API: https://github.com/tigera/operator/blob/e5880ff2edf627c98cf12ad29f24bcab011be78d/api/v1/calico_node_types.go#L25-L37
Likely not the problem, but CalicoNodeDaemonSet: should be lower case: calicoNodeDaemonSet:
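For what it's worth, with the case fixed and staying within what the linked API currently supports, the override would look something like this (a sketch; the resource values are placeholders):

calicoNodeDaemonSet:
  spec:
    template:
      spec:
        containers:
          - name: calico-node
            resources:
              requests:
                cpu: 250m
                memory: 512Mi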
In the current IPv6-only environment, once the containerPort is added, the discovered labels become address="[fddd:3bcc:a689::66]:$port", and Prometheus's relabel_configs can capture and rewrite that:
- action: replace
  regex: ([^:]+)(?::\d+)?;(\d+)
  replacement: $1:$2
  source_labels:
    - __address__
    - __meta_kubernetes_pod_annotation_prometheus_io_port
  target_label: __address__
- action: replace
  regex: (\[[^]]+\])(?::\d+)?;(\d+)
  replacement: $1:$2
  source_labels:
    - __address__
    - __meta_kubernetes_pod_annotation_prometheus_io_port
  target_label: __address__
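To make the second rule concrete, here is how it transforms a bracketed IPv6 target (a walkthrough with the address from above; the 9091 annotation value is an assumption):

# Inputs, joined with the default ';' separator:
#   __address__ ; prometheus.io/port annotation
#   [fddd:3bcc:a689::66]:9091;9091
# Regex capture:  $1 = [fddd:3bcc:a689::66]   $2 = 9091
# Written back:   __address__ = [fddd:3bcc:a689::66]:9091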
At present, I use a less rigorous relabel_configs that meets my needs, but I don't know whether it will cause problems in the future:
- action: replace
  regex: '^([a-fA-F0-9:]{2,})$'
  replacement: '[$1]'
  source_labels:
    - __address__
  # target_label is mandatory for the replace action; write the result back.
  target_label: __address__
- action: replace
  regex: ([^:]+)(?::\d+)?;(\d+)
  replacement: $1:$2
  source_labels:
    - __address__
    - __meta_kubernetes_pod_annotation_prometheus_io_port
  target_label: __address__
- action: replace
  regex: (\[[^]]+\])(?::\d+)?;(\d+)
  replacement: $1:$2
  source_labels:
    - __address__
    - __meta_kubernetes_pod_annotation_prometheus_io_port
  target_label: __address__
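For a bare IPv6 address, the rules chain like this (a walkthrough with the address from above; the 9091 annotation value is an assumption):

# Discovered:
#   __address__ = fddd:3bcc:a689::66
# Rule 1: the bare-IPv6 regex matches and wraps the address in brackets:
#   __address__ = [fddd:3bcc:a689::66]
# Rule 2: does not match the bracketed form (Prometheus relabel regexes
#   are fully anchored, and ([^:]+) cannot span the colons), no change.
# Rule 3: matches '[fddd:3bcc:a689::66];9091' and appends the port:
#   __address__ = [fddd:3bcc:a689::66]:9091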
So I think adding the containerPort on the operator side would be the simplest solution.
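If the operator did add the metrics port to the calico-node container spec, the rendered DaemonSet would carry something like the following (a sketch of the hoped-for output, not what the operator currently renders; the port name is an assumption):

containers:
  - name: calico-node
    ports:
      # Declaring the port makes Prometheus's pod discovery emit
      # <podIP>:9091 (bracketed for IPv6) instead of a bare pod IP.
      - name: metrics
        containerPort: 9091
        protocol: TCP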