terraform-provider-rancher2
rancher2_app_v2 always suggests changes with the same values.yml
I'm facing a similar, if not the same, issue with the rancher2_app_v2 resource as the one described in rancher/terraform-provider-rancher2#500:
I always get changes on plan/apply even though the values file is static.
I'm using a custom rancher-monitoring values.yml, and I'm working with a Rancher v2.5.5 HA installation.
$ terraform version
Terraform v0.14.5
+ provider registry.terraform.io/rancher/rancher2 v1.11.0 # was using v1.10.6 before, same behaviour
$ md5sum apps/apps_values/k8s-gke-dev/rancher-monitoring/values.yml
9aa061929b2eeab98d0a907d280103ee apps/apps_values/k8s-gke-dev/rancher-monitoring/values.yml
$ cat 5_apps.tf
resource "rancher2_app_v2" "dev_monitoring" {
cluster_id = "c-abcde"
name = "rancher-monitoring"
namespace = "cattle-monitoring-system"
repo_name = "rancher-charts"
chart_name = "rancher-monitoring"
chart_version = "9.4.202"
values = file("apps/apps_values/k8s-gke-dev/rancher-monitoring/values.yml")
}
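Judging from the diff below, the values kept in state look like a normalized copy of what the chart ends up with (keys sorted alphabetically, empty lists rendered as null, cluster-specific defaults such as cattle.clusterId merged in), so a plain string comparison against the raw file will always show a change. As a stopgap I'm considering ignoring the attribute after the first apply — a rough sketch only; it obviously doesn't fix the underlying normalization issue and means real changes to values.yml need a manual push:

resource "rancher2_app_v2" "dev_monitoring" {
  cluster_id    = "c-abcde"
  name          = "rancher-monitoring"
  namespace     = "cattle-monitoring-system"
  repo_name     = "rancher-charts"
  chart_name    = "rancher-monitoring"
  chart_version = "9.4.202"
  values        = file("apps/apps_values/k8s-gke-dev/rancher-monitoring/values.yml")

  lifecycle {
    # Stopgap: suppress the perpetual in-place update caused by the normalized
    # state representation. Remove or comment out this block whenever
    # values.yml actually changes and needs to be applied.
    ignore_changes = [values]
  }
}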
1st terraform apply
$ terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# rancher2_app_v2.dev_monitoring will be updated in-place
~ resource "rancher2_app_v2" "dev_monitoring" {
id = "c-abcde.cattle-monitoring-system/rancher-monitoring"
name = "rancher-monitoring"
~ values = <<-EOT
- additionalPrometheusRules: null
+ prometheus-adapter:
+ enabled: true
+ prometheus:
+ url: http://rancher-monitoring-prometheus.cattle-monitoring-system.svc
+ port: 9090
+ image:
+ repository: rancher/directxman12-k8s-prometheus-adapter-amd64
+ tag: v0.7.0
+ pullPolicy: IfNotPresent
+ pullSecrets: {}
+ psp:
+ create: true
+ rkeControllerManager:
+ enabled: false
+ metricsPort: 10252
+ component: kube-controller-manager
+ clients:
+ port: 10011
+ useLocalhost: true
+ nodeSelector:
+ node-role.kubernetes.io/controlplane: "true"
+ tolerations:
+ - effect: NoExecute
+ operator: Exists
+ - effect: NoSchedule
+ operator: Exists
+ rkeScheduler:
+ enabled: false
+ metricsPort: 10251
+ component: kube-scheduler
+ clients:
+ port: 10012
+ useLocalhost: true
+ nodeSelector:
+ node-role.kubernetes.io/controlplane: "true"
+ tolerations:
+ - effect: NoExecute
+ operator: Exists
+ - effect: NoSchedule
+ operator: Exists
+ rkeProxy:
+ enabled: false
+ metricsPort: 10249
+ component: kube-proxy
+ clients:
+ port: 10013
+ useLocalhost: true
+ tolerations:
+ - effect: NoExecute
+ operator: Exists
+ - effect: NoSchedule
+ operator: Exists
+ rkeEtcd:
+ enabled: false
+ metricsPort: 2379
+ component: kube-etcd
+ clients:
+ port: 10014
+ https:
+ enabled: true
+ certDir: /etc/kubernetes/ssl
+ certFile: kube-etcd-*.pem
+ keyFile: kube-etcd-*-key.pem
+ caCertFile: kube-ca.pem
+ nodeSelector:
+ node-role.kubernetes.io/etcd: "true"
+ tolerations:
+ - effect: NoExecute
+ operator: Exists
+ - effect: NoSchedule
+ operator: Exists
+ k3sServer:
+ enabled: false
+ metricsPort: 10249
+ component: k3s-server
+ clients:
+ port: 10013
+ useLocalhost: true
+ tolerations:
+ - effect: NoExecute
+ operator: Exists
+ - effect: NoSchedule
+ operator: Exists
+ kubeAdmControllerManager:
+ enabled: false
+ metricsPort: 10257
+ component: kube-controller-manager
+ clients:
+ port: 10011
+ useLocalhost: true
+ https:
+ enabled: true
+ useServiceAccountCredentials: true
+ insecureSkipVerify: true
+ nodeSelector:
+ node-role.kubernetes.io/master: ""
+ tolerations:
+ - effect: NoExecute
+ operator: Exists
+ - effect: NoSchedule
+ operator: Exists
+ kubeAdmScheduler:
+ enabled: false
+ metricsPort: 10259
+ component: kube-scheduler
+ clients:
+ port: 10012
+ useLocalhost: true
+ https:
+ enabled: true
+ useServiceAccountCredentials: true
+ insecureSkipVerify: true
+ nodeSelector:
+ node-role.kubernetes.io/master: ""
+ tolerations:
+ - effect: NoExecute
+ operator: Exists
+ - effect: NoSchedule
+ operator: Exists
+ kubeAdmProxy:
+ enabled: false
+ metricsPort: 10249
+ component: kube-proxy
+ clients:
+ port: 10013
+ useLocalhost: true
+ tolerations:
+ - effect: NoExecute
+ operator: Exists
+ - effect: NoSchedule
+ operator: Exists
+ kubeAdmEtcd:
+ enabled: false
+ metricsPort: 2381
+ component: kube-etcd
+ clients:
+ port: 10014
+ useLocalhost: true
+ nodeSelector:
+ node-role.kubernetes.io/master: ""
+ tolerations:
+ - effect: NoExecute
+ operator: Exists
+ - effect: NoSchedule
+ operator: Exists
+ rke2ControllerManager:
+ enabled: false
+ metricsPort: 10252
+ component: kube-controller-manager
+ clients:
+ port: 10011
+ useLocalhost: true
+ nodeSelector:
+ node-role.kubernetes.io/master: "true"
+ tolerations:
+ - effect: NoExecute
+ operator: Exists
+ - effect: NoSchedule
+ operator: Exists
+ rke2Scheduler:
+ enabled: false
+ metricsPort: 10251
+ component: kube-scheduler
+ clients:
+ port: 10012
+ useLocalhost: true
+ nodeSelector:
+ node-role.kubernetes.io/master: "true"
+ tolerations:
+ - effect: NoExecute
+ operator: Exists
+ - effect: NoSchedule
+ operator: Exists
+ rke2Proxy:
+ enabled: false
+ metricsPort: 10249
+ component: kube-proxy
+ clients:
+ port: 10013
+ useLocalhost: true
+ tolerations:
+ - effect: NoExecute
+ operator: Exists
+ - effect: NoSchedule
+ operator: Exists
+ rke2Etcd:
+ enabled: false
+ metricsPort: 2381
+ component: kube-etcd
+ clients:
+ port: 10014
+ useLocalhost: true
+ nodeSelector:
+ node-role.kubernetes.io/etcd: "true"
+ tolerations:
+ - effect: NoSchedule
+ key: node-role.kubernetes.io/master
+ operator: Equal
+ nameOverride: rancher-monitoring
+ namespaceOverride: cattle-monitoring-system
+ kubeTargetVersionOverride: ""
+ fullnameOverride: ""
+ commonLabels: {}
+ defaultRules:
+ create: true
+ rules:
+ alertmanager: true
+ etcd: true
+ general: true
+ k8s: true
+ kubeApiserver: true
+ kubeApiserverAvailability: true
+ kubeApiserverError: true
+ kubeApiserverSlos: true
+ kubelet: true
+ kubePrometheusGeneral: true
+ kubePrometheusNodeAlerting: true
+ kubePrometheusNodeRecording: true
+ kubernetesAbsent: true
+ kubernetesApps: true
+ kubernetesResources: true
+ kubernetesStorage: true
+ kubernetesSystem: true
+ kubeScheduler: true
+ kubeStateMetrics: true
+ network: true
+ node: true
+ prometheus: true
+ prometheusOperator: true
+ time: true
+ runbookUrl: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#
+ appNamespacesTarget: .*
+ labels: {}
+ annotations: {}
+ additionalPrometheusRules: []
+ global:
+ cattle:
+ systemDefaultRegistry: ""
+ kubectl:
+ repository: rancher/kubectl
+ tag: v1.18.6
+ pullPolicy: IfNotPresent
+ rbac:
+ create: true
+ userRoles:
+ create: true
+ aggregateToDefaultRoles: true
+ pspEnabled: true
+ pspAnnotations: {}
+ imagePullSecrets: []
alertmanager:
- alertmanagerSpec:
- additionalPeers: null
- affinity: {}
- configMaps: null
- containers: null
- externalUrl: null
- image:
- repository: rancher/prom-alertmanager
- sha: ""
- tag: v0.21.0
- listenLocal: false
- logFormat: logfmt
- logLevel: info
- nodeSelector: {}
- paused: false
- podAntiAffinity: ""
- podAntiAffinityTopologyKey: kubernetes.io/hostname
- podMetadata: {}
- portName: web
- priorityClassName: ""
- replicas: 1
- resources:
- limits:
- cpu: 1000m
- memory: 500Mi
- requests:
- cpu: 100m
- memory: 100Mi
- retention: 120h
- routePrefix: /
- secrets: null
- securityContext:
- fsGroup: 2000
- runAsGroup: 2000
- runAsNonRoot: true
- runAsUser: 1000
- storage: {}
- tolerations: null
- useExistingSecret: false
+ enabled: true
apiVersion: v2
+ serviceAccount:
+ create: true
+ name: ""
+ annotations: {}
+ podDisruptionBudget:
+ enabled: false
+ minAvailable: 1
+ maxUnavailable: ""
config:
global:
resolve_timeout: 5m
- receivers:
- - name: "null"
route:
group_by:
- job
- group_interval: 5m
group_wait: 30s
- receiver: "null"
+ group_interval: 5m
repeat_interval: 12h
+ receiver: "null"
routes:
- match:
alertname: Watchdog
receiver: "null"
+ receivers:
+ - name: "null"
templates:
- /etc/alertmanager/config/*.tmpl
- enabled: true
- ingress:
- annotations: {}
- enabled: false
- hosts: null
- labels: {}
- paths: null
- tls: null
- ingressPerReplica:
- annotations: {}
- enabled: false
- hostDomain: ""
- hostPrefix: ""
- labels: {}
- paths: null
- tlsSecretName: ""
- tlsSecretPerReplica:
- enabled: false
- prefix: alertmanager
- podDisruptionBudget:
- enabled: false
- maxUnavailable: ""
- minAvailable: 1
- secret:
- annotations: {}
- cleanupOnUninstall: false
- image:
- pullPolicy: IfNotPresent
- repository: rancher/rancher-agent
- tag: v2.4.8
- securityContext:
- runAsNonRoot: true
- runAsUser: 1000
- service:
- annotations: {}
- clusterIP: ""
- externalIPs: null
- labels: {}
- loadBalancerIP: ""
- loadBalancerSourceRanges: null
- nodePort: 30903
- port: 9093
- targetPort: 9093
- type: ClusterIP
- serviceAccount:
- annotations: {}
- create: true
- name: ""
- serviceMonitor:
- interval: ""
- metricRelabelings: null
- relabelings: null
- selfMonitor: true
- servicePerReplica:
- annotations: {}
- enabled: false
- loadBalancerSourceRanges: null
- nodePort: 30904
- port: 9093
- targetPort: 9093
- type: ClusterIP
+ tplConfig: false
templateFiles:
rancher_defaults.tmpl: |-
{{- define "slack.rancher.text" -}}
{{ template "rancher.text_multiple" . }}
{{- end -}}
{{- define "rancher.text_multiple" -}}
*[GROUP - Details]*
One or more alarms in this group have triggered a notification.
{{- if gt (len .GroupLabels.Values) 0 }}
*Group Labels:*
{{- range .GroupLabels.SortedPairs }}
• *{{ .Name }}:* `{{ .Value }}`
{{- end }}
{{- end }}
{{- if .ExternalURL }}
*Link to AlertManager:* {{ .ExternalURL }}
{{- end }}
{{- range .Alerts }}
{{ template "rancher.text_single" . }}
{{- end }}
{{- end -}}
{{- define "rancher.text_single" -}}
{{- if .Labels.alertname }}
*[ALERT - {{ .Labels.alertname }}]*
{{- else }}
*[ALERT]*
{{- end }}
{{- if .Labels.severity }}
*Severity:* `{{ .Labels.severity }}`
{{- end }}
{{- if .Labels.cluster }}
*Cluster:* {{ .Labels.cluster }}
{{- end }}
{{- if .Annotations.summary }}
*Summary:* {{ .Annotations.summary }}
{{- end }}
{{- if .Annotations.message }}
*Message:* {{ .Annotations.message }}
{{- end }}
{{- if .Annotations.description }}
*Description:* {{ .Annotations.description }}
{{- end }}
{{- if .Annotations.runbook_url }}
*Runbook URL:* <{{ .Annotations.runbook_url }}|:spiral_note_pad:>
{{- end }}
{{- with .Labels }}
{{- with .Remove (stringSlice "alertname" "severity" "cluster") }}
{{- if gt (len .) 0 }}
*Additional Labels:*
{{- range .SortedPairs }}
• *{{ .Name }}:* `{{ .Value }}`
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- with .Annotations }}
{{- with .Remove (stringSlice "summary" "message" "description" "runbook_url") }}
{{- if gt (len .) 0 }}
*Additional Annotations:*
{{- range .SortedPairs }}
• *{{ .Name }}:* `{{ .Value }}`
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end -}}
- tplConfig: false
- commonLabels: {}
- coreDns:
- enabled: true
+ ingress:
+ enabled: false
+ annotations: {}
+ labels: {}
+ hosts: []
+ paths: []
+ tls: []
+ secret:
+ cleanupOnUninstall: false
+ image:
+ repository: rancher/rancher-agent
+ tag: v2.4.8
+ pullPolicy: IfNotPresent
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: 1000
+ annotations: {}
+ ingressPerReplica:
+ enabled: false
+ annotations: {}
+ labels: {}
+ hostPrefix: ""
+ hostDomain: ""
+ paths: []
+ tlsSecretName: ""
+ tlsSecretPerReplica:
+ enabled: false
+ prefix: alertmanager
service:
- port: 9153
- targetPort: 9153
+ annotations: {}
+ labels: {}
+ clusterIP: ""
+ port: 9093
+ targetPort: 9093
+ nodePort: 30903
+ externalIPs: []
+ loadBalancerIP: ""
+ loadBalancerSourceRanges: []
+ type: ClusterIP
+ servicePerReplica:
+ enabled: false
+ annotations: {}
+ port: 9093
+ targetPort: 9093
+ nodePort: 30904
+ loadBalancerSourceRanges: []
+ type: ClusterIP
serviceMonitor:
interval: ""
- metricRelabelings: null
- relabelings: null
- defaultRules:
- annotations: {}
- appNamespacesTarget: .*
- create: true
- labels: {}
- rules:
- alertmanager: true
- etcd: true
- general: true
- k8s: true
- kubeApiserver: true
- kubeApiserverAvailability: true
- kubeApiserverError: true
- kubeApiserverSlos: true
- kubePrometheusGeneral: true
- kubePrometheusNodeAlerting: true
- kubePrometheusNodeRecording: true
- kubeScheduler: true
- kubeStateMetrics: true
- kubelet: true
- kubernetesAbsent: true
- kubernetesApps: true
- kubernetesResources: true
- kubernetesStorage: true
- kubernetesSystem: true
- network: true
- node: true
- prometheus: true
- prometheusOperator: true
- time: true
- runbookUrl: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#
- fullnameOverride: ""
- global:
- cattle:
- clusterId: c-abcde
- clusterName: k8s-gke-dev
- systemDefaultRegistry: ""
- imagePullSecrets: null
- kubectl:
- pullPolicy: IfNotPresent
- repository: rancher/kubectl
- tag: v1.18.6
- rbac:
- create: true
- pspAnnotations: {}
- pspEnabled: true
- userRoles:
- aggregateToDefaultRoles: true
- create: true
+ selfMonitor: true
+ metricRelabelings: []
+ relabelings: []
+ alertmanagerSpec:
+ podMetadata: {}
+ image:
+ repository: rancher/prom-alertmanager
+ tag: v0.21.0
+ sha: ""
+ useExistingSecret: false
+ secrets: []
+ configMaps: []
+ logFormat: logfmt
+ logLevel: info
+ replicas: 1
+ retention: 120h
+ storage: {}
+ externalUrl: null
+ routePrefix: /
+ paused: false
+ nodeSelector: {}
+ resources:
+ limits:
+ memory: 500Mi
+ cpu: 1000m
+ requests:
+ memory: 100Mi
+ cpu: 100m
+ podAntiAffinity: ""
+ podAntiAffinityTopologyKey: kubernetes.io/hostname
+ affinity: {}
+ tolerations: []
+ securityContext:
+ runAsGroup: 2000
+ runAsNonRoot: true
+ runAsUser: 1000
+ fsGroup: 2000
+ listenLocal: false
+ containers: []
+ priorityClassName: ""
+ additionalPeers: []
+ portName: web
grafana:
- additionalDataSources: null
- adminPassword: prom-operator
- defaultDashboardsEnabled: true
+ enabled: true
+ namespaceOverride: ""
+ grafana.ini:
+ users:
+ auto_assign_org_role: Viewer
+ auth:
+ disable_login_form: false
+ auth.anonymous:
+ enabled: true
+ org_role: Viewer
+ auth.basic:
+ enabled: false
+ dashboards:
+ default_home_dashboard_path: /tmp/dashboards/rancher-default-home.json
deploymentStrategy:
type: Recreate
- enabled: true
- extraConfigmapMounts: null
- extraContainerVolumes:
- - emptyDir: {}
- name: nginx-home
- - configMap:
- items:
- - key: nginx.conf
- mode: 438
- path: nginx.conf
- name: grafana-nginx-proxy-config
- name: grafana-nginx
+ defaultDashboardsEnabled: true
+ adminPassword: prom-operator
+ ingress:
+ enabled: false
+ annotations: {}
+ labels: {}
+ hosts: []
+ path: /
+ tls: []
+ sidecar:
+ dashboards:
+ enabled: true
+ label: grafana_dashboard
+ searchNamespace: cattle-dashboards
+ annotations: {}
+ datasources:
+ enabled: true
+ defaultDatasourceEnabled: true
+ annotations: {}
+ createPrometheusReplicasDatasources: false
+ label: grafana_datasource
+ extraConfigmapMounts: []
+ additionalDataSources: []
+ service:
+ portName: nginx-http
+ port: 80
+ targetPort: 8080
+ nodePort: 30950
+ type: ClusterIP
+ proxy:
+ image:
+ repository: rancher/library-nginx
+ tag: 1.19.2-alpine
extraContainers: |
- name: grafana-proxy
args:
- nginx
- -g
- daemon off;
- -c
- /nginx/nginx.conf
image: "{{ template "system_default_registry" . }}{{ .Values.proxy.image.repository }}:{{ .Values.proxy.image.tag }}"
ports:
- containerPort: 8080
name: nginx-http
protocol: TCP
volumeMounts:
- mountPath: /nginx
name: grafana-nginx
- mountPath: /var/cache/nginx
name: nginx-home
securityContext:
runAsUser: 101
runAsGroup: 101
- grafana.ini:
- auth:
- disable_login_form: false
- auth.anonymous:
- enabled: true
- org_role: Viewer
- auth.basic:
- enabled: false
- dashboards:
- default_home_dashboard_path: /tmp/dashboards/rancher-default-home.json
- users:
- auto_assign_org_role: Viewer
- ingress:
- annotations: {}
- enabled: false
- hosts: null
- labels: {}
- path: /
- tls: null
- namespaceOverride: ""
- proxy:
- image:
- repository: rancher/library-nginx
- tag: 1.19.2-alpine
- resources:
- limits:
- cpu: 200m
- memory: 200Mi
- requests:
- cpu: 100m
- memory: 100Mi
- service:
- nodePort: 30950
- port: 80
- portName: nginx-http
- targetPort: 8080
- type: ClusterIP
+ extraContainerVolumes:
+ - name: nginx-home
+ emptyDir: {}
+ - name: grafana-nginx
+ configMap:
+ name: grafana-nginx-proxy-config
+ items:
+ - key: nginx.conf
+ mode: 438
+ path: nginx.conf
serviceMonitor:
interval: ""
- metricRelabelings: null
- relabelings: null
selfMonitor: true
- sidecar:
- dashboards:
- annotations: {}
- enabled: true
- label: grafana_dashboard
- searchNamespace: cattle-dashboards
- datasources:
- annotations: {}
- createPrometheusReplicasDatasources: false
- defaultDatasourceEnabled: true
- enabled: true
- label: grafana_datasource
- k3sServer:
- clients:
- port: 10013
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: k3s-server
- enabled: false
- metricsPort: 10249
- kube-state-metrics:
- namespaceOverride: ""
- podSecurityPolicy:
- enabled: true
- rbac:
- create: true
+ metricRelabelings: []
+ relabelings: []
resources:
limits:
- cpu: 100m
memory: 200Mi
+ cpu: 200m
requests:
+ memory: 100Mi
cpu: 100m
- memory: 130Mi
- kubeAdmControllerManager:
- clients:
- https:
- enabled: true
- insecureSkipVerify: true
- useServiceAccountCredentials: true
- nodeSelector:
- node-role.kubernetes.io/master: ""
- port: 10011
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: kube-controller-manager
- enabled: false
- metricsPort: 10257
- kubeAdmEtcd:
- clients:
- nodeSelector:
- node-role.kubernetes.io/master: ""
- port: 10014
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: kube-etcd
- enabled: false
- metricsPort: 2381
- kubeAdmProxy:
- clients:
- port: 10013
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: kube-proxy
- enabled: false
- metricsPort: 10249
- kubeAdmScheduler:
- clients:
- https:
- enabled: true
- insecureSkipVerify: true
- useServiceAccountCredentials: true
- nodeSelector:
- node-role.kubernetes.io/master: ""
- port: 10012
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: kube-scheduler
- enabled: false
- metricsPort: 10259
kubeApiServer:
enabled: true
- relabelings: null
+ tlsConfig:
+ serverName: kubernetes
+ insecureSkipVerify: false
+ relabelings: []
serviceMonitor:
interval: ""
jobLabel: component
- metricRelabelings: null
selector:
matchLabels:
component: apiserver
provider: kubernetes
- tlsConfig:
- insecureSkipVerify: false
- serverName: kubernetes
+ metricRelabelings: []
+ kubelet:
+ enabled: true
+ namespace: kube-system
+ serviceMonitor:
+ interval: ""
+ https: true
+ cAdvisor: true
+ probes: true
+ resource: true
+ resourcePath: /metrics/resource/v1alpha1
+ cAdvisorMetricRelabelings: []
+ probesMetricRelabelings: []
+ cAdvisorRelabelings:
+ - sourceLabels:
+ - __metrics_path__
+ targetLabel: metrics_path
+ probesRelabelings:
+ - sourceLabels:
+ - __metrics_path__
+ targetLabel: metrics_path
+ resourceRelabelings:
+ - sourceLabels:
+ - __metrics_path__
+ targetLabel: metrics_path
+ metricRelabelings: []
+ relabelings:
+ - sourceLabels:
+ - __metrics_path__
+ targetLabel: metrics_path
kubeControllerManager:
enabled: false
- endpoints: null
+ endpoints: []
service:
port: 10252
targetPort: 10252
serviceMonitor:
+ interval: ""
https: false
insecureSkipVerify: null
- interval: ""
- metricRelabelings: null
- relabelings: null
serverName: null
+ metricRelabelings: []
+ relabelings: []
+ coreDns:
+ enabled: true
+ service:
+ port: 9153
+ targetPort: 9153
+ serviceMonitor:
+ interval: ""
+ metricRelabelings: []
+ relabelings: []
kubeDns:
enabled: false
service:
dnsmasq:
port: 10054
targetPort: 10054
skydns:
port: 10055
targetPort: 10055
serviceMonitor:
- dnsmasqMetricRelabelings: null
- dnsmasqRelabelings: null
interval: ""
- metricRelabelings: null
- relabelings: null
+ metricRelabelings: []
+ relabelings: []
+ dnsmasqMetricRelabelings: []
+ dnsmasqRelabelings: []
kubeEtcd:
enabled: false
- endpoints: null
+ endpoints: []
service:
port: 2379
targetPort: 2379
serviceMonitor:
- caFile: ""
- certFile: ""
- insecureSkipVerify: false
interval: ""
- keyFile: ""
- metricRelabelings: null
- relabelings: null
scheme: http
+ insecureSkipVerify: false
serverName: ""
- kubeProxy:
- enabled: false
- endpoints: null
- service:
- port: 10249
- targetPort: 10249
- serviceMonitor:
- https: false
- interval: ""
- metricRelabelings: null
- relabelings: null
+ caFile: ""
+ certFile: ""
+ keyFile: ""
+ metricRelabelings: []
+ relabelings: []
kubeScheduler:
enabled: false
- endpoints: null
+ endpoints: []
service:
port: 10251
targetPort: 10251
serviceMonitor:
+ interval: ""
https: false
insecureSkipVerify: null
- interval: ""
- metricRelabelings: null
- relabelings: null
serverName: null
- kubeStateMetrics:
- enabled: true
+ metricRelabelings: []
+ relabelings: []
+ kubeProxy:
+ enabled: false
+ endpoints: []
+ service:
+ port: 10249
+ targetPort: 10249
serviceMonitor:
interval: ""
- metricRelabelings: null
- relabelings: null
- kubeTargetVersionOverride: ""
- kubelet:
+ https: false
+ metricRelabelings: []
+ relabelings: []
+ kubeStateMetrics:
enabled: true
- namespace: kube-system
serviceMonitor:
- cAdvisor: true
- cAdvisorMetricRelabelings: null
- cAdvisorRelabelings:
- - sourceLabels:
- - __metrics_path__
- targetLabel: metrics_path
- https: true
interval: ""
- metricRelabelings: null
- probes: true
- probesMetricRelabelings: null
- probesRelabelings:
- - sourceLabels:
- - __metrics_path__
- targetLabel: metrics_path
- relabelings:
- - sourceLabels:
- - __metrics_path__
- targetLabel: metrics_path
- resource: true
- resourcePath: /metrics/resource/v1alpha1
- resourceRelabelings:
- - sourceLabels:
- - __metrics_path__
- targetLabel: metrics_path
- nameOverride: rancher-monitoring
- namespaceOverride: cattle-monitoring-system
+ metricRelabelings: []
+ relabelings: []
+ kube-state-metrics:
+ namespaceOverride: ""
+ rbac:
+ create: true
+ podSecurityPolicy:
+ enabled: true
+ resources:
+ limits:
+ cpu: 100m
+ memory: 200Mi
+ requests:
+ cpu: 100m
+ memory: 130Mi
nodeExporter:
enabled: true
jobLabel: jobLabel
serviceMonitor:
interval: ""
- metricRelabelings: null
- relabelings: null
scrapeTimeout: ""
- prometheus:
- additionalPodMonitors: null
- additionalServiceMonitors: null
- annotations: {}
- enabled: true
- ingress:
- annotations: {}
- enabled: false
- hosts: null
- labels: {}
- paths: null
- tls: null
- ingressPerReplica:
- annotations: {}
- enabled: false
- hostDomain: ""
- hostPrefix: ""
- labels: {}
- paths: null
- tlsSecretName: ""
- tlsSecretPerReplica:
- enabled: false
- prefix: prometheus
- podDisruptionBudget:
- enabled: false
- maxUnavailable: ""
- minAvailable: 1
- podSecurityPolicy:
- allowedCapabilities: null
- prometheusSpec:
- additionalAlertManagerConfigs: null
- additionalAlertRelabelConfigs: null
- additionalPrometheusSecretsAnnotations: {}
- additionalScrapeConfigs: null
- additionalScrapeConfigsSecret: {}
- affinity: {}
- alertingEndpoints: null
- apiserverConfig: {}
- configMaps: null
- containers: |
- - name: prometheus-proxy
- args:
- - nginx
- - -g
- - daemon off;
- - -c
- - /nginx/nginx.conf
- image: "{{ template "system_default_registry" . }}{{ .Values.prometheus.prometheusSpec.proxy.image.repository }}:{{ .Values.prometheus.prometheusSpec.proxy.image.tag }}"
- ports:
- - containerPort: 8080
- name: nginx-http
- protocol: TCP
- volumeMounts:
- - mountPath: /nginx
- name: prometheus-nginx
- - mountPath: /var/cache/nginx
- name: nginx-home
- securityContext:
- runAsUser: 101
- runAsGroup: 101
- disableCompaction: false
- enableAdminAPI: false
- evaluationInterval: ""
- externalLabels: {}
- externalUrl: ""
- ignoreNamespaceSelectors: false
- image:
- repository: rancher/prom-prometheus
- sha: ""
- tag: v2.18.2
- initContainers: null
- listenLocal: false
- logFormat: logfmt
- logLevel: info
- nodeSelector: {}
- paused: false
- podAntiAffinity: ""
- podAntiAffinityTopologyKey: kubernetes.io/hostname
- podMetadata: {}
- podMonitorNamespaceSelector: {}
- podMonitorSelector: {}
- podMonitorSelectorNilUsesHelmValues: false
- portName: nginx-http
- priorityClassName: ""
- prometheusExternalLabelName: ""
- prometheusExternalLabelNameClear: false
- proxy:
- image:
- repository: rancher/library-nginx
- tag: 1.19.2-alpine
- query: {}
- remoteRead: null
- remoteWrite: null
- remoteWriteDashboards: false
- replicaExternalLabelName: ""
- replicaExternalLabelNameClear: false
- replicas: 1
- resources:
- limits:
- cpu: 1000m
- memory: 1500Mi
- requests:
- cpu: 750m
- memory: 750Mi
- retention: 10d
- retentionSize: ""
- routePrefix: /
- ruleNamespaceSelector: {}
- ruleSelector: {}
- ruleSelectorNilUsesHelmValues: false
- scrapeInterval: ""
- secrets: null
- securityContext:
- fsGroup: 2000
- runAsGroup: 2000
- runAsNonRoot: true
- runAsUser: 1000
- serviceMonitorNamespaceSelector: {}
- serviceMonitorSelector: {}
- serviceMonitorSelectorNilUsesHelmValues: false
- storageSpec: {}
- thanos: {}
- tolerations: null
- volumeMounts: null
- volumes:
- - emptyDir: {}
- name: nginx-home
- - configMap:
- defaultMode: 438
- name: prometheus-nginx-proxy-config
- name: prometheus-nginx
- walCompression: false
- service:
- annotations: {}
- clusterIP: ""
- externalIPs: null
- labels: {}
- loadBalancerIP: ""
- loadBalancerSourceRanges: null
- nodePort: 30090
- port: 9090
- sessionAffinity: ""
- targetPort: 8080
- type: ClusterIP
- serviceAccount:
- create: true
- name: ""
- serviceMonitor:
- bearerTokenFile: null
- interval: ""
- metricRelabelings: null
- relabelings: null
- scheme: ""
- selfMonitor: true
- tlsConfig: {}
- servicePerReplica:
- annotations: {}
- enabled: false
- loadBalancerSourceRanges: null
- nodePort: 30091
- port: 9090
- targetPort: 9090
- type: ClusterIP
- thanosIngress:
- annotations: {}
- enabled: false
- hosts: null
- labels: {}
- paths: null
- servicePort: 10901
- tls: null
- prometheus-adapter:
- enabled: true
- image:
- pullPolicy: IfNotPresent
- pullSecrets: {}
- repository: rancher/directxman12-k8s-prometheus-adapter-amd64
- tag: v0.7.0
- prometheus:
- port: 9090
- url: http://rancher-monitoring-prometheus.cattle-monitoring-system.svc
- psp:
- create: true
+ metricRelabelings: []
+ relabelings: []
prometheus-node-exporter:
- extraArgs:
- - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
- - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
namespaceOverride: ""
podLabels:
jobLabel: node-exporter
+ extraArgs:
+ - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
+ - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
+ service:
+ port: 9796
+ targetPort: 9796
resources:
limits:
cpu: 200m
memory: 50Mi
requests:
cpu: 100m
memory: 30Mi
- service:
- port: 9796
- targetPort: 9796
prometheusOperator:
- admissionWebhooks:
+ enabled: true
+ manageCrds: true
+ tlsProxy:
enabled: true
+ image:
+ repository: rancher/squareup-ghostunnel
+ tag: v1.5.2
+ sha: ""
+ pullPolicy: IfNotPresent
+ resources: {}
+ admissionWebhooks:
failurePolicy: Fail
+ enabled: true
patch:
- affinity: {}
enabled: true
image:
- pullPolicy: IfNotPresent
repository: rancher/jettech-kube-webhook-certgen
- sha: ""
tag: v1.2.1
- nodeSelector: {}
- podAnnotations: {}
- priorityClassName: ""
+ sha: ""
+ pullPolicy: IfNotPresent
resources: {}
- tolerations: null
- affinity: {}
- cleanupCustomResource: false
- configReloaderCpu: 100m
- configReloaderMemory: 25Mi
- configmapReloadImage:
- repository: rancher/jimmidyson-configmap-reload
- sha: ""
- tag: v0.3.0
+ priorityClassName: ""
+ podAnnotations: {}
+ nodeSelector: {}
+ affinity: {}
+ tolerations: []
+ namespaces: {}
+ denyNamespaces: []
+ serviceAccount:
+ create: true
+ name: ""
+ service:
+ annotations: {}
+ labels: {}
+ clusterIP: ""
+ nodePort: 30080
+ nodePortTls: 30443
+ additionalPorts: []
+ loadBalancerIP: ""
+ loadBalancerSourceRanges: []
+ type: ClusterIP
+ externalIPs: []
createCustomResource: true
- denyNamespaces: null
- enabled: true
- hostNetwork: false
- image:
- pullPolicy: IfNotPresent
- repository: rancher/coreos-prometheus-operator
- sha: ""
- tag: v0.38.1
+ cleanupCustomResource: false
+ podLabels: {}
+ podAnnotations: {}
kubeletService:
enabled: true
namespace: kube-system
- manageCrds: true
- namespaces: {}
- nodeSelector: {}
- podAnnotations: {}
- podLabels: {}
- prometheusConfigReloaderImage:
- repository: rancher/coreos-prometheus-config-reloader
- sha: ""
- tag: v0.38.1
+ serviceMonitor:
+ interval: ""
+ scrapeTimeout: ""
+ selfMonitor: true
+ metricRelabelings: []
+ relabelings: []
resources:
limits:
cpu: 200m
memory: 500Mi
requests:
cpu: 100m
memory: 100Mi
- secretFieldSelector: ""
+ hostNetwork: false
+ nodeSelector: {}
+ tolerations: []
+ affinity: {}
securityContext:
fsGroup: 65534
runAsGroup: 65534
runAsNonRoot: true
runAsUser: 65534
+ image:
+ repository: rancher/coreos-prometheus-operator
+ tag: v0.38.1
+ sha: ""
+ pullPolicy: IfNotPresent
+ configmapReloadImage:
+ repository: rancher/jimmidyson-configmap-reload
+ tag: v0.3.0
+ sha: ""
+ prometheusConfigReloaderImage:
+ repository: rancher/coreos-prometheus-config-reloader
+ tag: v0.38.1
+ sha: ""
+ configReloaderCpu: 100m
+ configReloaderMemory: 25Mi
+ secretFieldSelector: ""
+ prometheus:
+ enabled: true
+ annotations: {}
+ serviceAccount:
+ create: true
+ name: ""
service:
- additionalPorts: null
annotations: {}
- clusterIP: ""
- externalIPs: null
labels: {}
+ clusterIP: ""
+ port: 9090
+ targetPort: 8080
+ externalIPs: []
+ nodePort: 30090
loadBalancerIP: ""
- loadBalancerSourceRanges: null
- nodePort: 30080
- nodePortTls: 30443
+ loadBalancerSourceRanges: []
type: ClusterIP
- serviceAccount:
- create: true
- name: ""
+ sessionAffinity: ""
+ servicePerReplica:
+ enabled: false
+ annotations: {}
+ port: 9090
+ targetPort: 9090
+ nodePort: 30091
+ loadBalancerSourceRanges: []
+ type: ClusterIP
+ podDisruptionBudget:
+ enabled: false
+ minAvailable: 1
+ maxUnavailable: ""
+ thanosIngress:
+ enabled: false
+ annotations: {}
+ labels: {}
+ servicePort: 10901
+ hosts: []
+ paths: []
+ tls: []
+ ingress:
+ enabled: false
+ annotations: {}
+ labels: {}
+ hosts: []
+ paths: []
+ tls: []
+ ingressPerReplica:
+ enabled: false
+ annotations: {}
+ labels: {}
+ hostPrefix: ""
+ hostDomain: ""
+ paths: []
+ tlsSecretName: ""
+ tlsSecretPerReplica:
+ enabled: false
+ prefix: prometheus
+ podSecurityPolicy:
+ allowedCapabilities: []
serviceMonitor:
interval: ""
- metricRelabelings: null
- relabelings: null
- scrapeTimeout: ""
selfMonitor: true
- tlsProxy:
- enabled: true
+ scheme: ""
+ tlsConfig: {}
+ bearerTokenFile: null
+ metricRelabelings: []
+ relabelings: []
+ prometheusSpec:
+ disableCompaction: false
+ apiserverConfig: {}
+ scrapeInterval: ""
+ evaluationInterval: ""
+ listenLocal: false
+ enableAdminAPI: false
image:
- pullPolicy: IfNotPresent
- repository: rancher/squareup-ghostunnel
+ repository: rancher/prom-prometheus
+ tag: v2.18.2
sha: ""
- tag: v1.5.2
- resources: {}
- tolerations: null
- rke2ControllerManager:
- clients:
- nodeSelector:
- node-role.kubernetes.io/master: "true"
- port: 10011
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: kube-controller-manager
- enabled: false
- metricsPort: 10252
- rke2Etcd:
- clients:
- nodeSelector:
- node-role.kubernetes.io/etcd: "true"
- port: 10014
- tolerations:
- - effect: NoSchedule
- key: node-role.kubernetes.io/master
- operator: Equal
- useLocalhost: true
- component: kube-etcd
- enabled: false
- metricsPort: 2381
- rke2Proxy:
- clients:
- port: 10013
- useLocalhost: true
- component: kube-proxy
- enabled: false
- metricsPort: 10249
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- rke2Scheduler:
- clients:
- nodeSelector:
- node-role.kubernetes.io/master: "true"
- port: 10012
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: kube-scheduler
- enabled: false
- metricsPort: 10251
- rkeControllerManager:
- clients:
- nodeSelector:
- node-role.kubernetes.io/controlplane: "true"
- port: 10011
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: kube-controller-manager
- enabled: false
- metricsPort: 10252
- rkeEtcd:
- clients:
- https:
- caCertFile: kube-ca.pem
- certDir: /etc/kubernetes/ssl
- certFile: kube-etcd-*.pem
- enabled: true
- keyFile: kube-etcd-*-key.pem
- nodeSelector:
- node-role.kubernetes.io/etcd: "true"
- port: 10014
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- component: kube-etcd
- enabled: false
- metricsPort: 2379
- rkeProxy:
- clients:
- port: 10013
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: kube-proxy
- enabled: false
- metricsPort: 10249
- rkeScheduler:
- clients:
- nodeSelector:
- node-role.kubernetes.io/controlplane: "true"
- port: 10012
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: kube-scheduler
- enabled: false
- metricsPort: 10251
+ tolerations: []
+ alertingEndpoints: []
+ externalLabels: {}
+ replicaExternalLabelName: ""
+ replicaExternalLabelNameClear: false
+ prometheusExternalLabelName: ""
+ prometheusExternalLabelNameClear: false
+ externalUrl: ""
+ ignoreNamespaceSelectors: false
+ nodeSelector: {}
+ secrets: []
+ configMaps: []
+ query: {}
+ ruleNamespaceSelector: {}
+ ruleSelectorNilUsesHelmValues: false
+ ruleSelector: {}
+ serviceMonitorSelectorNilUsesHelmValues: false
+ serviceMonitorSelector: {}
+ serviceMonitorNamespaceSelector: {}
+ podMonitorSelectorNilUsesHelmValues: false
+ podMonitorSelector: {}
+ podMonitorNamespaceSelector: {}
+ retention: 10d
+ retentionSize: ""
+ walCompression: false
+ paused: false
+ replicas: 1
+ logLevel: info
+ logFormat: logfmt
+ routePrefix: /
+ podMetadata: {}
+ podAntiAffinity: ""
+ podAntiAffinityTopologyKey: kubernetes.io/hostname
+ affinity: {}
+ remoteRead: []
+ remoteWrite: []
+ remoteWriteDashboards: false
+ resources:
+ limits:
+ memory: 1500Mi
+ cpu: 1000m
+ requests:
+ memory: 750Mi
+ cpu: 750m
+ storageSpec: {}
+ additionalScrapeConfigs: []
+ additionalScrapeConfigsSecret: {}
+ additionalPrometheusSecretsAnnotations: {}
+ additionalAlertManagerConfigs: []
+ additionalAlertRelabelConfigs: []
+ securityContext:
+ runAsGroup: 2000
+ runAsNonRoot: true
+ runAsUser: 1000
+ fsGroup: 2000
+ priorityClassName: ""
+ thanos: {}
+ proxy:
+ image:
+ repository: rancher/library-nginx
+ tag: 1.19.2-alpine
+ containers: |
+ - name: prometheus-proxy
+ args:
+ - nginx
+ - -g
+ - daemon off;
+ - -c
+ - /nginx/nginx.conf
+ image: "{{ template "system_default_registry" . }}{{ .Values.prometheus.prometheusSpec.proxy.image.repository }}:{{ .Values.prometheus.prometheusSpec.proxy.image.tag }}"
+ ports:
+ - containerPort: 8080
+ name: nginx-http
+ protocol: TCP
+ volumeMounts:
+ - mountPath: /nginx
+ name: prometheus-nginx
+ - mountPath: /var/cache/nginx
+ name: nginx-home
+ securityContext:
+ runAsUser: 101
+ runAsGroup: 101
+ volumes:
+ - name: nginx-home
+ emptyDir: {}
+ - name: prometheus-nginx
+ configMap:
+ name: prometheus-nginx-proxy-config
+ defaultMode: 438
+ volumeMounts: []
+ initContainers: []
+ portName: nginx-http
+ additionalServiceMonitors: []
+ additionalPodMonitors: []
EOT
# (13 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
rancher2_app_v2.dev_monitoring: Modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 10s elapsed]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 20s elapsed]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 30s elapsed]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 40s elapsed]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 50s elapsed]
rancher2_app_v2.dev_monitoring: Modifications complete after 57s [id=c-abcde.cattle-monitoring-system/rancher-monitoring]
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
$ md5sum apps/apps_values/k8s-gke-dev/rancher-monitoring/values.yml
9aa061929b2eeab98d0a907d280103ee apps/apps_values/k8s-gke-dev/rancher-monitoring/values.yml
2nd terraform apply
$ terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# rancher2_app_v2.dev_monitoring will be updated in-place
~ resource "rancher2_app_v2" "dev_monitoring" {
id = "c-abcde.cattle-monitoring-system/rancher-monitoring"
name = "rancher-monitoring"
~ values = <<-EOT
[... same values diff as in the 1st apply above, omitted here for brevity ...]
- podAntiAffinityTopologyKey: kubernetes.io/hostname
- podMetadata: {}
- podMonitorNamespaceSelector: {}
- podMonitorSelector: {}
- podMonitorSelectorNilUsesHelmValues: false
- portName: nginx-http
- priorityClassName: ""
- prometheusExternalLabelName: ""
- prometheusExternalLabelNameClear: false
- proxy:
- image:
- repository: rancher/library-nginx
- tag: 1.19.2-alpine
- query: {}
- remoteRead: null
- remoteWrite: null
- remoteWriteDashboards: false
- replicaExternalLabelName: ""
- replicaExternalLabelNameClear: false
- replicas: 1
- resources:
- limits:
- cpu: 1000m
- memory: 1500Mi
- requests:
- cpu: 750m
- memory: 750Mi
- retention: 10d
- retentionSize: ""
- routePrefix: /
- ruleNamespaceSelector: {}
- ruleSelector: {}
- ruleSelectorNilUsesHelmValues: false
- scrapeInterval: ""
- secrets: null
- securityContext:
- fsGroup: 2000
- runAsGroup: 2000
- runAsNonRoot: true
- runAsUser: 1000
- serviceMonitorNamespaceSelector: {}
- serviceMonitorSelector: {}
- serviceMonitorSelectorNilUsesHelmValues: false
- storageSpec: {}
- thanos: {}
- tolerations: null
- volumeMounts: null
- volumes:
- - emptyDir: {}
- name: nginx-home
- - configMap:
- defaultMode: 438
- name: prometheus-nginx-proxy-config
- name: prometheus-nginx
- walCompression: false
- service:
- annotations: {}
- clusterIP: ""
- externalIPs: null
- labels: {}
- loadBalancerIP: ""
- loadBalancerSourceRanges: null
- nodePort: 30090
- port: 9090
- sessionAffinity: ""
- targetPort: 8080
- type: ClusterIP
- serviceAccount:
- create: true
- name: ""
- serviceMonitor:
- bearerTokenFile: null
- interval: ""
- metricRelabelings: null
- relabelings: null
- scheme: ""
- selfMonitor: true
- tlsConfig: {}
- servicePerReplica:
- annotations: {}
- enabled: false
- loadBalancerSourceRanges: null
- nodePort: 30091
- port: 9090
- targetPort: 9090
- type: ClusterIP
- thanosIngress:
- annotations: {}
- enabled: false
- hosts: null
- labels: {}
- paths: null
- servicePort: 10901
- tls: null
- prometheus-adapter:
- enabled: true
- image:
- pullPolicy: IfNotPresent
- pullSecrets: {}
- repository: rancher/directxman12-k8s-prometheus-adapter-amd64
- tag: v0.7.0
- prometheus:
- port: 9090
- url: http://rancher-monitoring-prometheus.cattle-monitoring-system.svc
- psp:
- create: true
+ metricRelabelings: []
+ relabelings: []
prometheus-node-exporter:
- extraArgs:
- - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
- - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
namespaceOverride: ""
podLabels:
jobLabel: node-exporter
+ extraArgs:
+ - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
+ - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
+ service:
+ port: 9796
+ targetPort: 9796
resources:
limits:
cpu: 200m
memory: 50Mi
requests:
cpu: 100m
memory: 30Mi
- service:
- port: 9796
- targetPort: 9796
prometheusOperator:
- admissionWebhooks:
+ enabled: true
+ manageCrds: true
+ tlsProxy:
enabled: true
+ image:
+ repository: rancher/squareup-ghostunnel
+ tag: v1.5.2
+ sha: ""
+ pullPolicy: IfNotPresent
+ resources: {}
+ admissionWebhooks:
failurePolicy: Fail
+ enabled: true
patch:
- affinity: {}
enabled: true
image:
- pullPolicy: IfNotPresent
repository: rancher/jettech-kube-webhook-certgen
- sha: ""
tag: v1.2.1
- nodeSelector: {}
- podAnnotations: {}
- priorityClassName: ""
+ sha: ""
+ pullPolicy: IfNotPresent
resources: {}
- tolerations: null
- affinity: {}
- cleanupCustomResource: false
- configReloaderCpu: 100m
- configReloaderMemory: 25Mi
- configmapReloadImage:
- repository: rancher/jimmidyson-configmap-reload
- sha: ""
- tag: v0.3.0
+ priorityClassName: ""
+ podAnnotations: {}
+ nodeSelector: {}
+ affinity: {}
+ tolerations: []
+ namespaces: {}
+ denyNamespaces: []
+ serviceAccount:
+ create: true
+ name: ""
+ service:
+ annotations: {}
+ labels: {}
+ clusterIP: ""
+ nodePort: 30080
+ nodePortTls: 30443
+ additionalPorts: []
+ loadBalancerIP: ""
+ loadBalancerSourceRanges: []
+ type: ClusterIP
+ externalIPs: []
createCustomResource: true
- denyNamespaces: null
- enabled: true
- hostNetwork: false
- image:
- pullPolicy: IfNotPresent
- repository: rancher/coreos-prometheus-operator
- sha: ""
- tag: v0.38.1
+ cleanupCustomResource: false
+ podLabels: {}
+ podAnnotations: {}
kubeletService:
enabled: true
namespace: kube-system
- manageCrds: true
- namespaces: {}
- nodeSelector: {}
- podAnnotations: {}
- podLabels: {}
- prometheusConfigReloaderImage:
- repository: rancher/coreos-prometheus-config-reloader
- sha: ""
- tag: v0.38.1
+ serviceMonitor:
+ interval: ""
+ scrapeTimeout: ""
+ selfMonitor: true
+ metricRelabelings: []
+ relabelings: []
resources:
limits:
cpu: 200m
memory: 500Mi
requests:
cpu: 100m
memory: 100Mi
- secretFieldSelector: ""
+ hostNetwork: false
+ nodeSelector: {}
+ tolerations: []
+ affinity: {}
securityContext:
fsGroup: 65534
runAsGroup: 65534
runAsNonRoot: true
runAsUser: 65534
+ image:
+ repository: rancher/coreos-prometheus-operator
+ tag: v0.38.1
+ sha: ""
+ pullPolicy: IfNotPresent
+ configmapReloadImage:
+ repository: rancher/jimmidyson-configmap-reload
+ tag: v0.3.0
+ sha: ""
+ prometheusConfigReloaderImage:
+ repository: rancher/coreos-prometheus-config-reloader
+ tag: v0.38.1
+ sha: ""
+ configReloaderCpu: 100m
+ configReloaderMemory: 25Mi
+ secretFieldSelector: ""
+ prometheus:
+ enabled: true
+ annotations: {}
+ serviceAccount:
+ create: true
+ name: ""
service:
- additionalPorts: null
annotations: {}
- clusterIP: ""
- externalIPs: null
labels: {}
+ clusterIP: ""
+ port: 9090
+ targetPort: 8080
+ externalIPs: []
+ nodePort: 30090
loadBalancerIP: ""
- loadBalancerSourceRanges: null
- nodePort: 30080
- nodePortTls: 30443
+ loadBalancerSourceRanges: []
type: ClusterIP
- serviceAccount:
- create: true
- name: ""
+ sessionAffinity: ""
+ servicePerReplica:
+ enabled: false
+ annotations: {}
+ port: 9090
+ targetPort: 9090
+ nodePort: 30091
+ loadBalancerSourceRanges: []
+ type: ClusterIP
+ podDisruptionBudget:
+ enabled: false
+ minAvailable: 1
+ maxUnavailable: ""
+ thanosIngress:
+ enabled: false
+ annotations: {}
+ labels: {}
+ servicePort: 10901
+ hosts: []
+ paths: []
+ tls: []
+ ingress:
+ enabled: false
+ annotations: {}
+ labels: {}
+ hosts: []
+ paths: []
+ tls: []
+ ingressPerReplica:
+ enabled: false
+ annotations: {}
+ labels: {}
+ hostPrefix: ""
+ hostDomain: ""
+ paths: []
+ tlsSecretName: ""
+ tlsSecretPerReplica:
+ enabled: false
+ prefix: prometheus
+ podSecurityPolicy:
+ allowedCapabilities: []
serviceMonitor:
interval: ""
- metricRelabelings: null
- relabelings: null
- scrapeTimeout: ""
selfMonitor: true
- tlsProxy:
- enabled: true
+ scheme: ""
+ tlsConfig: {}
+ bearerTokenFile: null
+ metricRelabelings: []
+ relabelings: []
+ prometheusSpec:
+ disableCompaction: false
+ apiserverConfig: {}
+ scrapeInterval: ""
+ evaluationInterval: ""
+ listenLocal: false
+ enableAdminAPI: false
image:
- pullPolicy: IfNotPresent
- repository: rancher/squareup-ghostunnel
+ repository: rancher/prom-prometheus
+ tag: v2.18.2
sha: ""
- tag: v1.5.2
- resources: {}
- tolerations: null
- rke2ControllerManager:
- clients:
- nodeSelector:
- node-role.kubernetes.io/master: "true"
- port: 10011
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: kube-controller-manager
- enabled: false
- metricsPort: 10252
- rke2Etcd:
- clients:
- nodeSelector:
- node-role.kubernetes.io/etcd: "true"
- port: 10014
- tolerations:
- - effect: NoSchedule
- key: node-role.kubernetes.io/master
- operator: Equal
- useLocalhost: true
- component: kube-etcd
- enabled: false
- metricsPort: 2381
- rke2Proxy:
- clients:
- port: 10013
- useLocalhost: true
- component: kube-proxy
- enabled: false
- metricsPort: 10249
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- rke2Scheduler:
- clients:
- nodeSelector:
- node-role.kubernetes.io/master: "true"
- port: 10012
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: kube-scheduler
- enabled: false
- metricsPort: 10251
- rkeControllerManager:
- clients:
- nodeSelector:
- node-role.kubernetes.io/controlplane: "true"
- port: 10011
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: kube-controller-manager
- enabled: false
- metricsPort: 10252
- rkeEtcd:
- clients:
- https:
- caCertFile: kube-ca.pem
- certDir: /etc/kubernetes/ssl
- certFile: kube-etcd-*.pem
- enabled: true
- keyFile: kube-etcd-*-key.pem
- nodeSelector:
- node-role.kubernetes.io/etcd: "true"
- port: 10014
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- component: kube-etcd
- enabled: false
- metricsPort: 2379
- rkeProxy:
- clients:
- port: 10013
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: kube-proxy
- enabled: false
- metricsPort: 10249
- rkeScheduler:
- clients:
- nodeSelector:
- node-role.kubernetes.io/controlplane: "true"
- port: 10012
- tolerations:
- - effect: NoExecute
- operator: Exists
- - effect: NoSchedule
- operator: Exists
- useLocalhost: true
- component: kube-scheduler
- enabled: false
- metricsPort: 10251
+ tolerations: []
+ alertingEndpoints: []
+ externalLabels: {}
+ replicaExternalLabelName: ""
+ replicaExternalLabelNameClear: false
+ prometheusExternalLabelName: ""
+ prometheusExternalLabelNameClear: false
+ externalUrl: ""
+ ignoreNamespaceSelectors: false
+ nodeSelector: {}
+ secrets: []
+ configMaps: []
+ query: {}
+ ruleNamespaceSelector: {}
+ ruleSelectorNilUsesHelmValues: false
+ ruleSelector: {}
+ serviceMonitorSelectorNilUsesHelmValues: false
+ serviceMonitorSelector: {}
+ serviceMonitorNamespaceSelector: {}
+ podMonitorSelectorNilUsesHelmValues: false
+ podMonitorSelector: {}
+ podMonitorNamespaceSelector: {}
+ retention: 10d
+ retentionSize: ""
+ walCompression: false
+ paused: false
+ replicas: 1
+ logLevel: info
+ logFormat: logfmt
+ routePrefix: /
+ podMetadata: {}
+ podAntiAffinity: ""
+ podAntiAffinityTopologyKey: kubernetes.io/hostname
+ affinity: {}
+ remoteRead: []
+ remoteWrite: []
+ remoteWriteDashboards: false
+ resources:
+ limits:
+ memory: 1500Mi
+ cpu: 1000m
+ requests:
+ memory: 750Mi
+ cpu: 750m
+ storageSpec: {}
+ additionalScrapeConfigs: []
+ additionalScrapeConfigsSecret: {}
+ additionalPrometheusSecretsAnnotations: {}
+ additionalAlertManagerConfigs: []
+ additionalAlertRelabelConfigs: []
+ securityContext:
+ runAsGroup: 2000
+ runAsNonRoot: true
+ runAsUser: 1000
+ fsGroup: 2000
+ priorityClassName: ""
+ thanos: {}
+ proxy:
+ image:
+ repository: rancher/library-nginx
+ tag: 1.19.2-alpine
+ containers: |
+ - name: prometheus-proxy
+ args:
+ - nginx
+ - -g
+ - daemon off;
+ - -c
+ - /nginx/nginx.conf
+ image: "{{ template "system_default_registry" . }}{{ .Values.prometheus.prometheusSpec.proxy.image.repository }}:{{ .Values.prometheus.prometheusSpec.proxy.image.tag }}"
+ ports:
+ - containerPort: 8080
+ name: nginx-http
+ protocol: TCP
+ volumeMounts:
+ - mountPath: /nginx
+ name: prometheus-nginx
+ - mountPath: /var/cache/nginx
+ name: nginx-home
+ securityContext:
+ runAsUser: 101
+ runAsGroup: 101
+ volumes:
+ - name: nginx-home
+ emptyDir: {}
+ - name: prometheus-nginx
+ configMap:
+ name: prometheus-nginx-proxy-config
+ defaultMode: 438
+ volumeMounts: []
+ initContainers: []
+ portName: nginx-http
+ additionalServiceMonitors: []
+ additionalPodMonitors: []
EOT
# (13 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
rancher2_app_v2.dev_monitoring: Modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 10s elapsed]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 20s elapsed]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 30s elapsed]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 40s elapsed]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 50s elapsed]
rancher2_app_v2.dev_monitoring: Modifications complete after 57s [id=c-abcde.cattle-monitoring-system/rancher-monitoring]
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
I can't explain the behaviour on my side. Is the values attribute really intended to provide a values.yml replacement? Or is it a yaml overlay (only the values we want changed from the original one)?
Originally posted by @p4ranoidandro1d in https://github.com/rancher/terraform-provider-rancher2/issues/500#issuecomment-772436658
SURE-4490
@p4ranoidandro1d , the values argument is intended to provide a values.yaml replacement. No need to fully explain the behaviour on your side, but in order to reproduce the issue, could you please explain which values always show up as updated?
Hello @rawmind0
There's a whole set of yaml branches being updated and, to be honest, I can't find the logic behind it.
A few of them could be explained by the handling of empty lists or maps and the null "value".
For example, there's a first-level global: block with keys being removed:
- global:
- cattle:
- clusterId: c-abcde < removed
- clusterName: k8s-gke-dev < removed
- systemDefaultRegistry: ""
- imagePullSecrets: null < changed to []
- kubectl:
- pullPolicy: IfNotPresent
- repository: rancher/kubectl
- tag: v1.18.6
- rbac:
- create: true
- pspAnnotations: {}
- pspEnabled: true
- userRoles:
- aggregateToDefaultRoles: true
- create: true
+ global:
+ cattle:
+ systemDefaultRegistry: ""
+ kubectl:
+ repository: rancher/kubectl
+ tag: v1.18.6
+ pullPolicy: IfNotPresent
+ rbac:
+ create: true
+ userRoles:
+ create: true
+ aggregateToDefaultRoles: true
+ pspEnabled: true
+ pspAnnotations: {}
+ imagePullSecrets: []
Or this one, treated as updated even though there's no actual change:
rke2Etcd:
- clients:
- nodeSelector:
- node-role.kubernetes.io/etcd: "true"
- port: 10014
- tolerations:
- - effect: NoSchedule
- key: node-role.kubernetes.io/master
- operator: Equal
- useLocalhost: true
- component: kube-etcd
- enabled: false
- metricsPort: 2381
+ enabled: false
+ metricsPort: 2381
+ component: kube-etcd
+ clients:
+ port: 10014
+ useLocalhost: true
+ nodeSelector:
+ node-role.kubernetes.io/etcd: "true"
+ tolerations:
+ - effect: NoSchedule
+ key: node-role.kubernetes.io/master
+ operator: Equal
This must be a problem for everyone who uses the rancher2_app_v2 resource. The actual values.yaml in a Rancher new-style app is reformatted on the Rancher side and obviously doesn't match 1-to-1 as a string, even though the actual data in values.yaml is the same.
# module.eks_imported.rancher2_app_v2.vault-secrets-webhook[0] will be updated in-place
~ resource "rancher2_app_v2" "vault-secrets-webhook" {
id = "c-2g74h.vault-operator/vault-secrets-webhook"
name = "vault-secrets-webhook"
~ values = <<-EOT
- affinity:
- podAntiAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- - labelSelector:
- matchExpressions:
- - key: app.kubernetes.io/instance
- operator: In
- values:
- - vault-secrets-webhook
- topologyKey: kubernetes.io/hostname
- apiSideEffectValue: NoneOnDryRun
- configMapMutation: false
- configmapFailurePolicy: Ignore
- customResourceMutations: null
- customResourcesFailurePolicy: Ignore
+ replicaCount: 2
debug: true
- env:
- VAULT_IMAGE: docker.innowatts.net/cache/library/vault:1.6.0
- global:
- cattle:
- clusterId: c-2g74h
- clusterName: nonprod2-useast1-cluster1
- namespaceSelector:
- matchExpressions:
- - key: security.banzaicloud.io/mutate
- operator: NotIn
- values:
- - skip
- - key: field.cattle.io/projectId
- operator: NotIn
- values:
- - p-zlfdn
- podDisruptionBudget:
- enabled: true
- minAvailable: 1
- podsFailurePolicy: Fail
- priorityClassName: system-cluster-critical
rbac:
enabled: true
psp:
enabled: false
- replicaCount: 2
+
+ vaultEnv:
+ repository: docker.innowatts.net/cache/banzaicloud/vault-env
+ env:
+ VAULT_IMAGE: docker.innowatts.net/cache/library/vault:1.6.0
+
+ service:
+ name: vault-secrets-webhook
+ type: ClusterIP
+ externalPort: 443
+ internalPort: 8443
+
resources:
limits:
cpu: 100m
memory: 64Mi
requests:
cpu: 40m
memory: 32Mi
+ customResourceMutations: []
+ customResourcesFailurePolicy: Ignore
+ configMapMutation: false
+ configmapFailurePolicy: Ignore
+ podsFailurePolicy: Fail
secretsFailurePolicy: Ignore
- service:
- externalPort: 443
- internalPort: 8443
- name: vault-secrets-webhook
- type: ClusterIP
- vaultEnv:
- repository: docker.innowatts.net/cache/banzaicloud/vault-env
+ apiSideEffectValue: NoneOnDryRun
+ namespaceSelector:
+ matchExpressions:
+ - key: security.banzaicloud.io/mutate
+ operator: NotIn
+ values:
+ - skip
+ - key: field.cattle.io/projectId
+ operator: NotIn
+ values:
+ - p-zlfdn
+ podDisruptionBudget:
+ enabled: true
+ minAvailable: 1
+ priorityClassName: system-cluster-critical
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchExpressions:
+ - key: app.kubernetes.io/instance
+ operator: In
+ values:
+ - vault-secrets-webhook
+ topologyKey: "kubernetes.io/hostname"
EOT
# (14 unchanged attributes hidden)
}
It seems that this would make the rancher2_app_v2
resource unusable in practice. Are folks in this thread using another terraform provider to deploy their helm charts as a workaround?
We use the official Helm provider, which has two advantages over the rancher2_app_v2
resource in my opinion:
- the Helm chart is downloaded by the VM on which Terraform is executed, not from the cluster nodes themselves;
- the official Helm provider allows you to set values directly, allowing you to do string interpolation for example.
I have the same issue, and it's very annoying. The YAML gets reordered based on alphabetical sorting (I think), which makes it kinda unusable.
The rancher2_app_v2 values comparison is not done as a string. The key order doesn't matter, because the yaml content is unmarshaled to a map[string]interface{} before the reflect.DeepEqual comparison, https://github.com/rancher/terraform-provider-rancher2/blob/master/rancher2/schema_app_v2.go#L102 .
Agreed that the visualization of the change is confusing, but that's Terraform behaviour, since it shows the diff of string-type arguments.
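To make that comparison style concrete, here is a minimal, hedged sketch of the idea (not the provider's actual code): both values strings are unmarshaled into generic maps with gopkg.in/yaml.v2 and compared structurally, so key order and formatting don't matter, while nil vs. empty collections still register as a difference.
package main

import (
	"fmt"
	"reflect"

	"gopkg.in/yaml.v2"
)

// valuesEqual reports whether two YAML documents describe the same data,
// ignoring key order and formatting.
func valuesEqual(oldValues, newValues string) (bool, error) {
	var oldMap, newMap map[string]interface{}
	if err := yaml.Unmarshal([]byte(oldValues), &oldMap); err != nil {
		return false, err
	}
	if err := yaml.Unmarshal([]byte(newValues), &newMap); err != nil {
		return false, err
	}
	// reflect.DeepEqual still distinguishes nil from an empty list or map,
	// which is why `imagePullSecrets: null` vs `imagePullSecrets: []`
	// registers as a real change.
	return reflect.DeepEqual(oldMap, newMap), nil
}

func main() {
	a := "foo: 1\nbar: [x]\n"
	b := "bar: [x]\nfoo: 1\n"  // same data, different key order
	c := "foo: 1\nbar: null\n" // nil instead of an empty list
	fmt.Println(valuesEqual(a, b)) // true <nil>
	fmt.Println(valuesEqual(a, c)) // false <nil>
}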
@p4ranoidandro1d , global.cattle.cluster* data is injected by the provider and suppressed from the diff. Besides that, in your case the diff trigger seems to be imagePullSecrets: null -> imagePullSecrets: []. I'm guessing imagePullSecrets is not defined in your tf file, is it? If it isn't, try adding it as imagePullSecrets: [] to suppress the diff.
@savealive , the same applies to your case, but with customResourceMutations: null -> customResourceMutations: []. Try adding it as customResourceMutations: [] to suppress the diff.
@jmatsushita , the rancher2_app_v2 resource is usable; there are examples of apps v2 working fine when defining values.
The problem is caused by the combination of the field types and the json/yaml tag definitions like omitempty. Some distinguish between nil and empty objects, some don't.
Hi @rawmind0,
I've noticed that to reduce the TF plan diff I need to alphabetically order the values (at every level of the yaml)....
@julienym , alphabetical order has nothing to do with the diff generation (although the tf diff visualization may be confusing, see my previous comment); some other diff must be there. As mentioned, the values argument comparison between old and new values is done at a low level using map[string]interface{} and reflect.DeepEqual , https://github.com/rancher/terraform-provider-rancher2/blob/master/rancher2/schema_app_v2.go#L129
Well, the fact is, this makes reading the diff and actually finding the difference impossible. It doesn't keep the order. The simplest fix would be to sort the local values for diff generation.
Agreed that this makes reading the diff and finding the differences difficult, but not impossible. I don't think the proposed fix would be simple, because the argument is a string with yaml content (how would you simply sort it?). Anyway, such a fix should be submitted to the Terraform repo, since the tf provider doesn't have any control over how the tf diff is displayed.
Same issue here. Every run I need to apply changes, very annoying.... Any solution so far?
Has nothing changed? Any solution?
I have a similar issue: each time I run our CI/CD pipeline, changes are reported for the v2 apps:
-/+ resource "rancher2_app_v2" "applications" {
~ annotations = {
- "objectset.rio.cattle.io/applied" = "H4sIAAAAAAAA/7RYW2/buBL+KwT7KsmXOI4joA+nKXBQHJxukRT7sNsCHZEjiWuKFEjK2Wzg/74YSY7t+lI3tV+CWBx+c/u+IaVnXmEACQF4+swNVMhT7sCIEl0slI8zNKKswM151C77GgTZ0Jqt0UGwLvZPPmDFIy4cQlDWfFYV+gBVzVPTaB1xDRlqTz5s9heK4DEkTtlEQAgaE2UHJfiSp/xmfDvLAaZymstxPr6VNzCR49tsNpvMrm+zLBtlubyeTPgy4mCMDa2/o8hK8pSXqKsY6ppHB+3so0EXF4s5T/lgMYrY/5SRbx9QOAw/3NaXzpcJeUocagSPyWKU7K1mshidBHms3MuIt4b3mKNDI9Dz9M9nDrX6HZ1X1vCUt27mylAJXjJ5VaxNW8ahvBnCZJrHMM1u4glMruPb4WQW32RjiTdjvJmOp8QDa4KzWqPjaXANRjzTVsx/o3Dfo8bQRpeD9rj8uoy4r1GcQEBlcktmuXI+vMda2yekqMbD8SgejePR1efRbToaplfTPziR7hQriV44VXch8Q/GB9CaCVvVGgPyiPsAofE85XKFFXGHINtQ37D7Llh29+GBvVsFy+5KcOGL+WI+l8g2u8fQQKbRM9cYo0zR7ntJknkUjVPhiXkBxjNrGLB5k6EzGNAzoRsf0DEwkhVoCBK7WBUYgcxhbV3wLJQQmADDMmTSPhptQaJMKKA3rM+xVQ49+fbt2xdDVGCqz35vA1gyYHHcShzX5GSxYXvY2aMSSQWVgtr2E6Nm8cLgUTJMrobDnUZdpK4kFnx6tE6SlvhqE/8acSVat2UItU8HgzYpvxIMiRa8x+AH2hbWDygygu+SSvyi4NEeZUJdbz6hVPnOXBMQQNtiYz5AE2zc9+pQEWPh5NsKgihJjzsQAl1QuWqF0e/fayeVrzU8rebbFsn3bjjhkNjZY0ldWpnm773LtbMLJdHHxcJ1qOvFpO9c29ZBW9RdgH7CxT8g3e7GRsUkLmvQhIMbl8uIL0A32HYL8lwZ4kz6TEeURhfa5x09ZT/1IhKDU8J/sqSN2XA2jLjHBbZ0S/kjOCIxyafQNgPdEyFobP/rkv7Y5UMzqi3ue8yh0eEeC+WDIxwCIB6K0CLQePAq2Hatz2awWo94gKLj4WiWTNu8VAVF51H5VScPAm22exONWE1DvlfTgwBzEGNlFFND1yDDZJyMWxBrbNbYp8MAvUH8YrkBMppSKMuIGyvxATWKLp8lTXRvG9ceovQzWE2JdBqkO8yxLh9p0QcSmLYCqL7bbVs9Pda7w2vbIX9/7hsMj9bNlSmS+cwTlTdvAh+71U9WK/G0vhDIzk0MWttH+nvCnW8ZHb1xuIUS+B8hbGM2bh7bQJ0RbBldxOV5Pd1Zk6vi/1DvFnBjJtXO5kqjf403l4FIoAmldeqflov7mnnXObu3GrfTBVkpc3bUhcLHnwPNMMApyC/1cLR0JhfvlJE0SA97ynqLXy/UcV/9eY1u1xPUtd8C7W6tFR4SzWvYtHVs7kuB5vK77173evBRcn1uwOn5AHFOgMPzARZzPC+gm2Pc1zEuwUk0KC+DXqOrlPdqsUdBv44/vWj000tFP75s8cfnq/6n7rjYUV/cnyPnhZ2eG5Z62UO+lPoXhsdR/HWtL+VhI4PXTqvTMzinhw1CfpfKRZ2cn/002c9M0WLdAL78uvGlYXTKB84SdfUSwdVy/XHomfumqsD1bwQ2o8smyv9232ha8+Fy+W8AAAD//zVpjmxvFQAA"
- "objectset.rio.cattle.io/id" = "helm-app"
- "objectset.rio.cattle.io/owner-gvk" = "/v1, Kind=Secret"
- "objectset.rio.cattle.io/owner-name" = "sh.helm.release.v1.rancher-cis-benchmark.v1"
- "objectset.rio.cattle.io/owner-namespace" = "cis-operator-system"
} -> (known after apply)
chart_name = "rancher-cis-benchmark"
chart_version = "1.0.300"
cleanup_on_fail = true
~ cluster_id = "local" -> (known after apply) # forces replacement
~ cluster_name = "local" -> (known after apply)
disable_hooks = false
disable_open_api_validation = false
force_upgrade = false
~ id = "local.cis-operator-system/rancher-cis-benchmark" -> (known after apply)
~ labels = {
- "objectset.rio.cattle.io/hash" = "7298faa6d6fd2f29d7a4d29b884859bbb1bfd544"
} -> (known after apply)
name = "rancher-cis-benchmark"
namespace = "cis-operator-system"
~ project_id = "local:p-hq47c" -> (known after apply)
repo_name = "rancher-charts"
+ system_default_registry = (known after apply)
~ values = <<~EOT
affinity: {}
global:
cattle:
clusterId: local
clusterName: local
- systemDefaultRegistry: ""
- systemDefaultRegistry: ""
EOT
wait = true
}
# rancher2_app_v2.applications["rancher-logging"] must be replaced
-/+ resource "rancher2_app_v2" "applications" {
~ annotations = {
- "objectset.rio.cattle.io/applied" = "H4sIAAAAAAAA/+xbUXPbOJL+KyjkHsY+kpLsOHF0NQ+JncylJk5cdrJXe7YrgoCmhDUIcAFQjhLrv18BBClKlGQql92qrVrPw8QguvtDd6O70YB/4AwsYcQSPPyBJckAD7Emkk5Bx0JNJlxOcOQ/mJxQ95USawVUH2MzNxYyHGGqgViu5GeegbEky/FQFkJEWJAxCOP4q/HfgFoDNtFcJSWjhKvelJgpHuKXLwcnp8dAT14c0/Gr/mn/lJ30j+iL58DY8xfjdJCmbPD8hOBFhImUynp5Ozlzhod4CiKLSZ7jaOs89SBBx5PZPR7i3mwQoT+5ZL9fA9VgnyQLajPTxElKNAggBpLZIFnTZDIbdGK2W9WLCPupV5CCBknB4OHND0xy/hfQhiuJh9gLuufSLb9exZ44C688IDB4BS9fxYSlJH7ef5XGpwxO49PBKZwcpS9fwcmps76SVishQOOh1QVEeCwUvf/kgJ6DAOtxpUQYWNwtImxyoDtdjstUuQkp18aeQy7UHByeo/7RIB4cxYPjz4NXw0F/ePzyf7Fzsi6zGBiqeV6Cwe+lsUQIRFWWC7CAI2wssYXBQ8wqXhHWQJgHeSufoQ8lQKRy0MQqjc6mRNtbeStv1j/d/Ta1NjfDXm/C7bQYJ1RlvTGR3wmnQhWsV9m1IjhAF0SSCTBEQVpNBP8ODIVZHqWSIC1KRQHSMkQkC/+Ox9wi7pYjKSAlERWFsaATB+zZM2TFfzE9dL+MRqMxMdNb+R/IOQHSkCtEGEMNYLGxZCwAVfDvizFoCRZMTN1qTdKY7Fa1yq3IGbFQj/Gg5baAlgI8vAD5vfMnVlBnKzf0ecoN8uLRWClrrCa5QQTVav/082ovjZ053SrpeP5Zr7jktqKEhKuDSsGoME62nQK6+W8Q2VK432JmeoByQu/JBFDmbVub5FKDhr8X3HALxo3FaCkVDZLT/0QP3E7RG7AEvb58bxBIpzRW68ertZJeu+FnVWvcjZca85zcr2HPI7ft0Cibx2FgtNE5Kkaxj0poOX1PYz5DZ1fn5lZ+MYBGPk/AWWGsyq7AqEJT+N0HhlGJ0+kRzY6RVYjMFGfI6rlfpkIlreeGUq0yv6YR1cyMUKoEA11uCv8JslwQp0xi/Tzj1mB5BknpT+C2VObml/Y36PBwfQ8fHjqPcNQN41S2r7XKICWFsIgqmfJJoX1eSpATcbMydvfbs5XfD5AB7+FIcGONZ5YTTTKwoN2vxCJKJBpDzRsYYoV2GIN1SmHBK75Ivt0viupjj7mIDKX2Gk7Q2Agb/SGQLSlqGze1qSFTM6d2IVa0VsUvg4gxinJigS11WDpqaQ3hCRr+Wi3vrKm8Sm6qhFAPfsHOEU1Dl7WyXThrKFal/vO6yzZA2ClwXRt2RkQBxqN4RBt/Livmmz9voULofJmROhCeBzzNCQ5SvP8P+imqLoQlpBHPyAQSlxEMt0rPR9tU0FzsmZKWcAkaeXK0JH+CcDSZUu2qqF2RfrQCzZJJB0xboVky6UY4Ok5Ok6OtopqQ8kKISyU43VdbjhDlnrITpPfpR2UvNRiQti2qhORi/qcZaM0ZdFTUI6oIygyjUkTyvBPhaLeIElJaCLEnrAYkR90dV0dIdb3eHdOaljz1r4T0QCydfqw4dzLdI6rnuzzrOaBU6are/VSXu1fnPwNJjwlNQgHT2ZfOymzvaJEBPeMUEKFUFbKM0Vq5aL+BcOROINvFNCDlJt8D1iO6KIx12bgwVe5aW5qTnKD3qf9HFAoWg26Lfv/ohSMz6OrN6zOkQ+VjkK8CXVrnZaGxUl3cXCqGroEWmts58qGBV0WpWa9Ke0xR06NKUsit6ZXhoJcrFpvAIQ5jB1UxmQSFhfprp8JyzZVjciaIMc5dOnl65Tgt6icIRz8Wu/iXkEiacsltl2i55PxRMUCvA+U+hJ0g1YbdK7dcfuldQObSXEXv/QKMNT3BM243+HlnSFYJKEumzqCClj4vKTsTjm7unoYkFYNrEEB9Vu7I+RxSLgE9TDmdenxlkXepmEFEAzJ0CqwQwJCrvpuQOvnSsqPUXUsBUoPWR81WYZk7jHtDyhWr9r5L9fBtQ65uQWpGjEDVjOT10SZBNxdKA+IyVXsFlDqY0JJ776Cjun7ux+nqFutCvjYflbxSyt7iYQivYfyLAe0GB/1+3w2m5g+titwNHfX7/UVV95k9lblWZP2ztVqunLjTzaXmMy5gAm8NJeWRz63Oh22vBiDskxRzp553XEDZKqwUVStg09n7CS3UadgdupNlci0ZZkq66pzLSRIy9EU5sj2r1gwvtcrATqEwyywRmAS2SSM1PbrD12thQUti+QzEPEIE/fX1xQeUcgHlWdnkQHnKQ2AoT23eTGsH63CmzrWaceYy+dSxWDs6+8Nggt4pjeAbyXIBezZJ4rQ6OM5Jtl8DzNULx4AODz/z/PBwiP6qCg+6MLDScbhpCLj7rfHLQbtN9K7RNny3bBvOOKniVXWizjXMuCqqfhtTYNDhoVT28LBe7ijQjJYZy6rQQNgmqtGYRKt9qgzlpc6qBbbaetVO27u917MaoJcRJ7ZX9i9bc6q+80Ew+r9W2zSuzbdsn/6DOyVx3ZV+qmPybzX+EjX+u/HUufFkhdnzqIke0duSAH3+cO0bmYXk1BsdjcE+AMhm/7JJ6LLhLs41pHBrU97GdTs+eQHXIacxZDyp7xlEoQqmZXGCrDCIgm5ge0T+xuSBC4HUDPSD5hYQKazKiOW07LI7Ep66lQKagISqc72OesztvxhuMyUa2J/Q8Wj4iK49AbqHeW1x6Y8YvwXDxbUqDlYIbxy2AALYXYtz2biqaLt75mOVu5t3fN3W0sknl5Bk11ZR06g1+co9RKOhtUroL1+IWd6brk5ch9S5Q/sY6gsHhO7ToX1Eo0HyInn+RNNjHVKHfvYuSDv72bW43tLio6chdWgab4bUoWncsVUctsg+YXfdvVkHmkC4h3uz7p601BLbz5O8lmaDZDBITmIici4hHoyextTpamQ7pieuRrZciQQIo93QOt1DbILW6R5iT6eaKVFkcKHY+ydNuYS0SvSkQR/RyN8Vb+0J7IS0y5KdIa3b8xGNxoWZj9W3LZh2QtplwQ6QNttxT8OVxfAVCEUY6J3WW/GldaJd1nNbr58cbb9fexrSFuvtB6lhvUc0+hvPMs7mRsleSZyRPNaewagLpC3W6wqpZb09DRf+fzmjLt8nhFIw5sJVQy3KJaQctOHGgqQQ/AqVhCjzdVSLcHRzBYT9j6vvPkkK613jnZDqLntStckTY5UmExjthmT496254xGNjgZ/8KcujrZACvI33o7shlQSUke4AdItdv/tisO3Ei8i7I+QeNjxReesfirob6f7g9bruDMlBFBryhc1XLgjplATE548TflkKuarB96yRXipHsCV0uM5euMzDzpzqQetd3QSHOF7mD8ozQwe3uAluGVXEUe4aoziuwhz6qFVrYHQDwirc6mOGANlf0fVXZ7EzByf9vtI
kud/WVUDbj0spcQSoSaNZ5qu3I9DzdtWbUw1+z0jlk5xtIE4HFz8G8VAuXEe4yYXZF49Lf1Qq6Y9tdPb3BaVMniIBZfFt42fQ2fUxJOZdhODJpudF66S0MxLhXrozQZjsGSwkVtoiMZbvbFNUvC4PnAvAeDFIsJlT8QZhzDGnaWICAq6LoOC/3bv/xcq0urpaYRhy/jkHjaO3x+XflAF168gJ1z6d7ReuwxHbSK9iVmE68rdfRRq8lXADJwT+ceuEc4g+zou0hT0V3/ph4f45OKNX7S+h6MN+BYRru5A8fBH6130IjzLXmv514AYN47d5YxWr3ZriF8bt4b+fTGkKVAH6aO6Dhdt5QbGQ+xOyrFWApLVm47wKjgXRLrJ3nJ4iJ0ovIhWeL79BrSwT7MES1mL1V2E15+G4CHGEZ4INSYibGQrvFVKu4WO1hVMuLHaSfR6dqHFTcqVC6an/dO+i0D+UsJ7VUu95RZ4f+kX4VdZv3dfRNjOc4fkrJ7lZPiKwQup0zwe4maC9q+Oq4JiuWOalXy83kvEEbZkUkeySs5lIUTZuzHVg3w/Xnl1yidfdagj3NBGuct6Jl6vZ2qxZRnm5K64+UZ+4Xy7PObWXPyZfIXJVwbjYvKzrOKSumbItjJq6raaW69t7XTnzdjIUs6rVq6/Nm/Vtn+2Xg+F4cb1uGO1ugUasXsR4fYlcel6rQcX5bAeE7qCrtz3uck3Y9aQC07JmSpcIB64AVNH2cUyPa/I3id2rOSkCNfuPMRv/14Q0djrYcl3q0mgHcM2bfpqnzKfTKjnG8ZK3dSj28PD7m+d3cHfyLY1u1hR7e4/6PC8X5dPoZZ/2PEzf7fjovCKIP+ciRR2qjT/XrZV70+92zUhhIB2pbwht8j/lbwr2IRlXP5DOM84POzHuCp42tzfcMmWFuiimg32/cWGJXluVrRxXj/2/sWSNleLbYV9WBHyq6Q3niRQpUEZf0O8Qb/V3vz/ib9rnKUG3f5Sbgoiq/EeL5Z/dfQDmyLLiA61nBq7KALsj/qGAw/7i8X/BQAA//+5CbhetDcAAA"
- "objectset.rio.cattle.io/id" = "helm-app"
- "objectset.rio.cattle.io/owner-gvk" = "/v1, Kind=Secret"
- "objectset.rio.cattle.io/owner-name" = "sh.helm.release.v1.rancher-logging.v1"
- "objectset.rio.cattle.io/owner-namespace" = "cattle-logging-system"
} -> (known after apply)
chart_name = "rancher-logging"
chart_version = "3.8.201"
cleanup_on_fail = true
~ cluster_id = "local" -> (known after apply) # forces replacement
~ cluster_name = "local" -> (known after apply)
disable_hooks = false
disable_open_api_validation = false
force_upgrade = false
~ id = "local.cattle-logging-system/rancher-logging" -> (known after apply)
~ labels = {
- "objectset.rio.cattle.io/hash" = "771583ec563cb90808d502c64edd46bf1ffd145a"
} -> (known after apply)
name = "rancher-logging"
namespace = "cattle-logging-system"
~ project_id = "local:p-hq47c" -> (known after apply)
repo_name = "rancher-charts"
+ system_default_registry = (known after apply)
~ values = <<~EOT
disablePvc: true
global:
cattle:
clusterId: local
clusterName: local
- systemDefaultRegistry: ""
- systemDefaultRegistry: ""
monitoring:
serviceMonitor:
enabled: true
replicaCount: 1
EOT
wait = true
}
@mikekuzak , in your case the problem seems to be how rancher2_app_v2.cluster_id is defined (known after apply). Changes to this argument force the resource to be replaced:
~ cluster_id = "local" -> (known after apply) # forces replacement
Hi @rawmind0,
Thanks for that. Is there a better way of doing this? I'm using data "rancher2_cluster" to get the cluster id.
data "rancher2_cluster" "local" {
provider = rancher2.admin
name = "local"
depends_on = [null_resource.wait_for_rancher, rancher2_bootstrap.admin]
}
resource "rancher2_app_v2" "applications" {
for_each = { for k, v in local.system_apps : k => v if v.enabled == true }
provider = rancher2.admin
cluster_id = data.rancher2_cluster.local.id
project_id = data.rancher2_cluster.local.default_project_id
name = each.key
namespace = each.value.namespace
repo_name = each.value.repo_name
chart_name = each.key
chart_version = each.value.chart_version
values = each.value.values
cleanup_on_fail = true
wait = true
depends_on = [rancher2_catalog_v2.catalogs_v2]
}
@rawmind0 Hi Raul,
Do you think this will do the trick?
lifecycle {
ignore_changes = [
# Ignore changes to tags, e.g. because a management agent
# updates these based on some ruleset managed elsewhere.
cluster_id, project_id
]
}
Maybe DiffSuppressOnRefresh should be able to fix this.
From the Changelog:
helper/schema: The Schema type DiffSuppressOnRefresh field opts in to using DiffSuppressFunc to detect normalization changes during refresh, using the same rules as for planning. This can prevent normalization cascading downstream and producing confusing changes in other resources, and will avoid reporting "Values changed outside of Terraform" for normalization-only situations. This is a desirable behavior for most attributes that have DiffSuppressFunc and so would ideally be on by default, but it is opt-in for backward compatibility reasons. (#882)
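For illustration only, here is a hedged sketch of what opting in might look like in a provider schema built on terraform-plugin-sdk v2. This is an assumed usage pattern, not the rancher2 provider's actual schema; the field and function names below are hypothetical.
package example

import (
	"reflect"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"gopkg.in/yaml.v2"
)

// suppressYamlDiff treats two values strings as equal when they decode to
// the same structure, regardless of key order or formatting.
func suppressYamlDiff(k, old, new string, d *schema.ResourceData) bool {
	var oldMap, newMap map[string]interface{}
	if yaml.Unmarshal([]byte(old), &oldMap) != nil || yaml.Unmarshal([]byte(new), &newMap) != nil {
		return false
	}
	return reflect.DeepEqual(oldMap, newMap)
}

// valuesSchema is a hypothetical schema field for a YAML `values` argument.
func valuesSchema() *schema.Schema {
	return &schema.Schema{
		Type:             schema.TypeString,
		Optional:         true,
		DiffSuppressFunc: suppressYamlDiff,
		// Opt in to running the same suppression logic during refresh, so
		// API-side reformatting doesn't surface as a change outside Terraform.
		DiffSuppressOnRefresh: true,
	}
}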
As @julienym mentioned above, recursively alphabetizing the keys in your yaml files will greatly reduce or eliminate this kind of diff noise, as Rancher seems to reliably return manifests with keys alphabetized, so terraform is able to do a simple diff.
We used yq to do this:
yq -i 'sort_keys('.')' <yaml file>
But you could also achieve the same with a few lines of python or your preferred programming language.
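A minimal sketch of that approach, written here in Go for consistency with the provider rather than python. It assumes gopkg.in/yaml.v2, whose encoder emits map keys in sorted order; the file path is a placeholder.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v2"
)

// sortYamlKeys decodes a YAML document into generic maps and re-encodes it.
// yaml.v2 marshals every map it encounters with sorted keys, so nested maps
// come out alphabetized as well.
func sortYamlKeys(in []byte) ([]byte, error) {
	var doc map[string]interface{}
	if err := yaml.Unmarshal(in, &doc); err != nil {
		return nil, err
	}
	return yaml.Marshal(doc)
}

func main() {
	raw, err := os.ReadFile("values.yml") // placeholder path
	if err != nil {
		panic(err)
	}
	sorted, err := sortYamlKeys(raw)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(sorted))
}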
Single quotes vs double quotes, empty lists vs [].
I have formatted the values.yml that I push via terraform with ' changed to " and the []'s removed.
Verified on v2.7-head ID: 4a9b7ba
I followed the suggested testing flow in the PR - https://github.com/rancher/terraform-provider-rancher2/pull/1021 - and was able to confirm it's working as expected now for both k8s v1.24 (and older) and k8s 1.25
I'm on Rancher 2.6.13 and k8s v1.24 with Rancher Provider 3.3.1 and still have this same problem. Can you explain what is working as expected?