
Kustomization reconciliation failed: too many fields within:

Open castorls opened this issue 3 years ago • 16 comments

I've installed Flux v0.23.0, and some Kustomizations that should be working cannot be updated, giving me this error:

2021-11-15T08:40:40.715Z error Kustomization/rights.flux-system - Reconciliation failed after 23.465925ms, next try in 5m0s too many fields within: exploitation_clusterRole_rbac.authorization.k8s.io

As far as I can tell from searching, the error message is sent by the apply mechanism when the object metadata doesn't have the correct format (in this case, the "_" between clusterRole and rbac should be replaced by ".").

A poor man's bisect tells me that the faulty release is kustomize-controller 0.15.0 (and I'd put another penny on https://github.com/fluxcd/kustomize-controller/commit/3a03d235c241e097e0b6c5d293eca1e55e7c3917, without any validation).

A clean install of Flux CD 0.23.0, followed by a downgrade of kustomize-controller to version 0.14.1, fixes the issue.
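
For reference, a downgrade like this can be done by pointing the controller Deployment at the older image. A rough sketch, assuming a default flux-system install (the container name "manager" is an assumption based on the standard Flux manifests); note that a bootstrapped flux-system Kustomization may revert the change on its next sync unless the image is also pinned in Git:

kubectl -n flux-system set image deployment/kustomize-controller \
  manager=ghcr.io/fluxcd/kustomize-controller:v0.14.1
kubectl -n flux-system rollout status deployment/kustomize-controller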

castorls avatar Nov 15 '21 08:11 castorls

Can you post an RBAC example of how to replicate this error?

stefanprodan avatar Nov 15 '21 10:11 stefanprodan

Here's the tar.gz of both repositories used in my test (flux-system and the linked admin repository).

Here are the commands to add the administration repo and its Kustomizations:

flux create source git administration \
  --interval 5m0s \
  --branch "${CLUSTER_NAME}" \
  --secret-ref flux-system \
  --url "${GIT_BASE_URL}${ADMIN_REPOSITORY}"
  
flux create kustomization rights \
  --source=GitRepository/administration \
  --path="./rights" \
  --prune=true \
  --interval=5m

flux create kustomization delegations \
    --source=GitRepository/administration \
    --path="./delegations" \
    --prune=true \
    --interval=5m

administration.tar.gz flux-system.tar.gz

castorls avatar Nov 15 '21 13:11 castorls

Can you please confirm that your RBAC can be applied on the cluster with kubectl apply --server-side -f?

stefanprodan avatar Nov 15 '21 13:11 stefanprodan

Yes, it's working:

 kubectl apply --server-side -f ../rights/exploitation_clusterrole.yaml 
clusterrole.rbac.authorization.k8s.io/ops_clusterRole2 serverside-applied

Note that the problem seems to occur only on update.

castorls avatar Nov 15 '21 15:11 castorls

Having a similar issue with the Thanos monitoring mixin. The initial error message wasn't very helpful:

{"level":"error","ts":"2021-11-19T10:41:41.825Z","logger":"controller.kustomization","msg":"Reconciliation failed after 7.335991041s, next try in 1m0s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"kube-prometheus-components","namespace":"monitoring","revision":"master/958670aabba07f97fdebbd0287076fd34877ae83","error":"too many fields within: grafana-dashboard-bucket_replicate_","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227"}                                                                                                                                                                                     
{"level":"debug","ts":"2021-11-19T10:41:41.826Z","logger":"events","msg":"Warning","object":{"kind":"Kustomization","namespace":"monitoring","name":"kube-prometheus-components","uid":"e8ee9cd5-4701-4519-bc41-93ef37b7f760","apiVersion":"kustomize.toolkit.fluxcd.io/v1beta2","resourceVersion":"52741687"},"reason":"error","message":"too many fields within: grafana-dashboard-bucket_replicate_"}
# $ kubectl -n monitoring get kustomization kube-prometheus-components -o yaml

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  creationTimestamp: "2021-11-10T14:59:40Z"
  finalizers:
  - finalizers.fluxcd.io
  generation: 5
  labels:
    kustomize.toolkit.fluxcd.io/name: infra
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  managedFields:
  - apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:kustomize.toolkit.fluxcd.io/name: {}
          f:kustomize.toolkit.fluxcd.io/namespace: {}
      f:spec:
        f:dependsOn: {}
        f:interval: {}
        f:path: {}
        f:prune: {}
        f:sourceRef:
          f:kind: {}
          f:name: {}
          f:namespace: {}
        f:targetNamespace: {}
    manager: kustomize-controller
    operation: Apply
    time: "2021-11-16T11:29:39Z"
  - apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .: {}
          v:"finalizers.fluxcd.io": {}
      f:status:
        f:conditions: {}
        f:inventory:
          .: {}
          f:entries: {}
        f:lastAppliedRevision: {}
        f:lastAttemptedRevision: {}
        f:observedGeneration: {}
    manager: kustomize-controller
    operation: Update
    time: "2021-11-10T15:05:56Z"
  name: kube-prometheus-components
  namespace: monitoring
  resourceVersion: "52722166"
  uid: e8ee9cd5-4701-4519-bc41-93ef37b7f760
spec:
  dependsOn:
  - name: kube-prometheus-setup
    namespace: monitoring
  - name: kube-prometheus-install
    namespace: kube-system
  force: false
  interval: 1m
  path: kubernetes/clusters/management/kube-prometheus/components
  prune: true
  sourceRef:
    kind: GitRepository
    name: infra
    namespace: flux-system
  targetNamespace: monitoring
status:
  conditions:
  - lastTransitionTime: "2021-11-19T10:16:21Z"
    message: 'too many fields within: grafana-dashboard-bucket_replicate_'
    reason: ReconciliationFailed
    status: "False"
    type: Ready
  inventory:
    entries:
    - id: monitoring_alertmanager-main__ServiceAccount
      v: v1
    - id: monitoring_grafana__ServiceAccount
      v: v1
    - id: monitoring_prometheus-k8s__ServiceAccount
      v: v1
    - id: monitoring_prometheus-k8s-config_rbac.authorization.k8s.io_Role
      v: v1
    - id: monitoring_prometheus-k8s_rbac.authorization.k8s.io_ClusterRole
      v: v1
    - id: monitoring_prometheus-k8s-config_rbac.authorization.k8s.io_RoleBinding
      v: v1
    - id: monitoring_prometheus-k8s_rbac.authorization.k8s.io_ClusterRoleBinding
      v: v1
    - id: monitoring_grafana-dashboard-alertmanager-overview__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-apiserver__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-cert-manager__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-cluster-total__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-controller-manager__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-k8s-resources-cluster__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-k8s-resources-namespace__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-k8s-resources-node__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-k8s-resources-pod__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-k8s-resources-workload__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-k8s-resources-workloads-namespace__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-kubelet__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-namespace-by-pod__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-namespace-by-workload__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-node-cluster-rsrc-use__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-node-rsrc-use__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-nodes__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-persistentvolumesusage__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-pod-total__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-prometheus__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-prometheus-remote-write__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-proxy__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-scheduler__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboard-workload-total__ConfigMap
      v: v1
    - id: monitoring_grafana-dashboards__ConfigMap
      v: v1
    - id: monitoring_alertmanager-main__Secret
      v: v1
    - id: monitoring_grafana-config__Secret
      v: v1
    - id: monitoring_grafana-datasources__Secret
      v: v1
    - id: monitoring_monitoring-http-basic-auth__Secret
      v: v1
    - id: monitoring_alertmanager-main__Service
      v: v1
    - id: monitoring_grafana__Service
      v: v1
    - id: monitoring_prometheus-k8s__Service
      v: v1
    - id: monitoring_prometheus-k8s-thanos-sidecar__Service
      v: v1
    - id: monitoring_grafana_apps_Deployment
      v: v1
    - id: monitoring_alertmanager-main_policy_PodDisruptionBudget
      v: v1
    - id: monitoring_main_monitoring.coreos.com_Alertmanager
      v: v1
    - id: monitoring_k8s_monitoring.coreos.com_Prometheus
      v: v1
    - id: monitoring_alertmanager-main-rules_monitoring.coreos.com_PrometheusRule
      v: v1
    - id: monitoring_cert-manager_monitoring.coreos.com_PrometheusRule
      v: v1
    - id: monitoring_kube-prometheus-rules_monitoring.coreos.com_PrometheusRule
      v: v1
    - id: monitoring_prometheus-k8s-prometheus-rules_monitoring.coreos.com_PrometheusRule
      v: v1
    - id: monitoring_prometheus-k8s-thanos-sidecar-rules_monitoring.coreos.com_PrometheusRule
      v: v1
    - id: monitoring_prometheus-operator-rules_monitoring.coreos.com_PrometheusRule
      v: v1
    - id: monitoring_alertmanager-main_monitoring.coreos.com_ServiceMonitor
      v: v1
    - id: monitoring_grafana_monitoring.coreos.com_ServiceMonitor
      v: v1
    - id: monitoring_prometheus-k8s_monitoring.coreos.com_ServiceMonitor
      v: v1
    - id: monitoring_prometheus-operator_monitoring.coreos.com_ServiceMonitor
      v: v1
    - id: monitoring_thanos-sidecar_monitoring.coreos.com_ServiceMonitor
      v: v1
    - id: monitoring_alertmanager-main_networking.k8s.io_Ingress
      v: v1
    - id: monitoring_grafana_networking.k8s.io_Ingress
      v: v1
    - id: monitoring_prometheus-k8s_networking.k8s.io_Ingress
      v: v1
  lastAppliedRevision: master/77627c84f65fd8ee95204f2b3e5760f2fdb7cd46
  lastAttemptedRevision: master/2e58eb07bd4136babce9969f823f6a6ac643f21e
  observedGeneration: 5

After deleting the above Kustomization, I got a "proper" error:

{"level":"error","ts":"2021-11-19T10:53:28.010Z","logger":"controller.kustomization","msg":"Reconciliation failed after 1.025864206s, next try in 1m0s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"kube-prometheus-components","namespace":"monitoring","revision":"master/958670aabba07f97fdebbd0287076fd34877ae83","error":"ConfigMap/monitoring/grafana-dashboard-bucket_replicate dry-run failed, reason: Invalid, error: ConfigMap \"grafana-dashboard-bucket_replicate\" is invalid: metadata.name: Invalid value: \"grafana-dashboard-bucket_replicate\": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')\n","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227"}                                                                                                                                                                                                                    
{"level":"debug","ts":"2021-11-19T10:53:28.011Z","logger":"events","msg":"Warning","object":{"kind":"Kustomization","namespace":"monitoring","name":"kube-prometheus-components","uid":"63f8f759-d6d8-43d3-ac61-0c38d8253286","apiVersion":"kustomize.toolkit.fluxcd.io/v1beta2","resourceVersion":"52752092"},"reason":"error","message":"ConfigMap/monitoring/grafana-dashboard-bucket_replicate dry-run failed, reason: Invalid, error: ConfigMap \"grafana-dashboard-bucket_replicate\" is invalid: metadata.name: Invalid value: \"grafana-dashboard-bucket_replicate\": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')\n"}
# $ kubectl -n monitoring get kustomization kube-prometheus-components -o yaml

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  creationTimestamp: "2021-11-19T10:53:26Z"
  finalizers:
  - finalizers.fluxcd.io
  generation: 2
  labels:
    kustomize.toolkit.fluxcd.io/name: infra
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  managedFields:
  - apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:kustomize.toolkit.fluxcd.io/name: {}
          f:kustomize.toolkit.fluxcd.io/namespace: {}
      f:spec:
        f:dependsOn: {}
        f:path: {}
        f:prune: {}
        f:sourceRef:
          f:kind: {}
          f:name: {}
          f:namespace: {}
        f:targetNamespace: {}
    manager: kustomize-controller
    operation: Apply
    time: "2021-11-19T10:53:26Z"
  - apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .: {}
          v:"finalizers.fluxcd.io": {}
      f:spec:
        f:interval: {}
      f:status:
        f:conditions: {}
        f:lastAttemptedRevision: {}
        f:observedGeneration: {}
    manager: kustomize-controller
    operation: Update
    time: "2021-11-19T10:53:28Z"
  name: kube-prometheus-components
  namespace: monitoring
  resourceVersion: "52752917"
  uid: 63f8f759-d6d8-43d3-ac61-0c38d8253286
spec:
  dependsOn:
  - name: kube-prometheus-setup
    namespace: monitoring
  - name: kube-prometheus-install
    namespace: kube-system
  force: false
  interval: 1m0s
  path: kubernetes/clusters/management/kube-prometheus/components
  prune: true
  sourceRef:
    kind: GitRepository
    name: infra
    namespace: flux-system
  targetNamespace: monitoring
status:
  conditions:
  - lastTransitionTime: "2021-11-19T10:54:29Z"
    message: |
      ConfigMap/monitoring/grafana-dashboard-bucket_replicate dry-run failed, reason: Invalid, error: ConfigMap "grafana-dashboard-bucket_replicate" is invalid: metadata.name: Invalid value: "grafana-dashboard-bucket_replicate": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
    reason: ReconciliationFailed
    status: "False"
    type: Ready
  lastAttemptedRevision: master/958670aabba07f97fdebbd0287076fd34877ae83
  observedGeneration: 2

kubeval gives the file a pass, as does a client-side kubectl apply dry-run:

kubeval kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains an empty YAML document
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-alertmanager-overview)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-apiserver)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-bucket_replicate)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-cluster-total)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-compact)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-controller-manager)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-k8s-resources-cluster)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-k8s-resources-namespace)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-k8s-resources-node)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-k8s-resources-pod)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-k8s-resources-workload)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-k8s-resources-workloads-namespace)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-kubelet)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-namespace-by-pod)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-namespace-by-workload)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-node-cluster-rsrc-use)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-node-rsrc-use)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-nodes)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-overview)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-persistentvolumesusage)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-pod-total)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-prometheus-remote-write)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-prometheus)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-proxy)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-query)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-query_frontend)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-receive)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-rule)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-scheduler)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-sidecar)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-store)
PASS - kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml contains a valid ConfigMap (monitoring.grafana-dashboard-workload-total)
$ kubectl -n monitoring apply -f kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml --dry-run=client
configmap/grafana-dashboard-alertmanager-overview created (dry run)
configmap/grafana-dashboard-apiserver created (dry run)
configmap/grafana-dashboard-bucket_replicate created (dry run)
configmap/grafana-dashboard-cluster-total created (dry run)
configmap/grafana-dashboard-compact created (dry run)
configmap/grafana-dashboard-controller-manager created (dry run)
configmap/grafana-dashboard-k8s-resources-cluster created (dry run)
configmap/grafana-dashboard-k8s-resources-namespace created (dry run)
configmap/grafana-dashboard-k8s-resources-node created (dry run)
configmap/grafana-dashboard-k8s-resources-pod created (dry run)
configmap/grafana-dashboard-k8s-resources-workload created (dry run)
configmap/grafana-dashboard-k8s-resources-workloads-namespace created (dry run)
configmap/grafana-dashboard-kubelet created (dry run)
configmap/grafana-dashboard-namespace-by-pod created (dry run)
configmap/grafana-dashboard-namespace-by-workload created (dry run)
configmap/grafana-dashboard-node-cluster-rsrc-use created (dry run)
configmap/grafana-dashboard-node-rsrc-use created (dry run)
configmap/grafana-dashboard-nodes created (dry run)
configmap/grafana-dashboard-overview created (dry run)
configmap/grafana-dashboard-persistentvolumesusage created (dry run)
configmap/grafana-dashboard-pod-total created (dry run)
configmap/grafana-dashboard-prometheus-remote-write created (dry run)
configmap/grafana-dashboard-prometheus created (dry run)
configmap/grafana-dashboard-proxy created (dry run)
configmap/grafana-dashboard-query created (dry run)
configmap/grafana-dashboard-query_frontend created (dry run)
configmap/grafana-dashboard-receive created (dry run)
configmap/grafana-dashboard-rule created (dry run)
configmap/grafana-dashboard-scheduler created (dry run)
configmap/grafana-dashboard-sidecar created (dry run)
configmap/grafana-dashboard-store created (dry run)
configmap/grafana-dashboard-workload-total created (dry run)

However, a server-side kubectl apply dry-run fails with the exact error the kustomize-controller is throwing:

kubectl -n monitoring apply -f kubernetes/clusters/management/kube-prometheus/components/grafana-dashboardDefinitions.yaml --dry-run=server
configmap/grafana-dashboard-alertmanager-overview created (server dry run)
configmap/grafana-dashboard-apiserver created (server dry run)
configmap/grafana-dashboard-cluster-total created (server dry run)
configmap/grafana-dashboard-compact created (server dry run)
configmap/grafana-dashboard-controller-manager created (server dry run)
configmap/grafana-dashboard-k8s-resources-cluster created (server dry run)
configmap/grafana-dashboard-k8s-resources-namespace created (server dry run)
configmap/grafana-dashboard-k8s-resources-node created (server dry run)
configmap/grafana-dashboard-k8s-resources-pod created (server dry run)
configmap/grafana-dashboard-k8s-resources-workload created (server dry run)
configmap/grafana-dashboard-k8s-resources-workloads-namespace created (server dry run)
configmap/grafana-dashboard-kubelet created (server dry run)
configmap/grafana-dashboard-namespace-by-pod created (server dry run)
configmap/grafana-dashboard-namespace-by-workload created (server dry run)
configmap/grafana-dashboard-node-cluster-rsrc-use created (server dry run)
configmap/grafana-dashboard-node-rsrc-use created (server dry run)
configmap/grafana-dashboard-nodes created (server dry run)
configmap/grafana-dashboard-overview created (server dry run)
configmap/grafana-dashboard-persistentvolumesusage created (server dry run)
configmap/grafana-dashboard-pod-total created (server dry run)
configmap/grafana-dashboard-prometheus-remote-write created (server dry run)
configmap/grafana-dashboard-prometheus created (server dry run)
configmap/grafana-dashboard-proxy created (server dry run)
configmap/grafana-dashboard-query created (server dry run)
configmap/grafana-dashboard-receive created (server dry run)
configmap/grafana-dashboard-rule created (server dry run)
configmap/grafana-dashboard-scheduler created (server dry run)
configmap/grafana-dashboard-sidecar created (server dry run)
configmap/grafana-dashboard-store created (server dry run)
configmap/grafana-dashboard-workload-total created (server dry run)
Error from server (Invalid): ConfigMap "grafana-dashboard-bucket_replicate" is invalid: metadata.name: Invalid value: "grafana-dashboard-bucket_replicate": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
Error from server (Invalid): ConfigMap "grafana-dashboard-query_frontend" is invalid: metadata.name: Invalid value: "grafana-dashboard-query_frontend": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

Server-side apply (not a dry-run) also fails with the same error.
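
To reproduce roughly what the controller does (it builds the overlay with kustomize and server-side applies the result), the whole build output can be dry-run applied in one go. A sketch, assuming kustomize is installed locally and using the spec.path from the Kustomization above:

kustomize build kubernetes/clusters/management/kube-prometheus/components \
  | kubectl apply --server-side --dry-run=server -f -

This should surface the same two invalid ConfigMap names (grafana-dashboard-bucket_replicate and grafana-dashboard-query_frontend) before the controller ever sees them.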

Flux version info:

$ flux version
flux: v0.23.0
helm-controller: v0.12.0
image-automation-controller: v0.15.0
image-reflector-controller: v0.12.0
kustomize-controller: v0.15.5
notification-controller: v0.17.1
source-controller: v0.16.0

$ flux check
► checking prerequisites
✔ Kubernetes 1.21.5-gke.1802 >=1.19.0-0
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.12.0
✔ image-automation-controller: deployment ready
► ghcr.io/fluxcd/image-automation-controller:v0.15.0
✔ image-reflector-controller: deployment ready
► ghcr.io/fluxcd/image-reflector-controller:v0.12.0
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v0.15.5
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v0.17.1
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v0.16.0
✔ all checks passed

Kubernetes version info:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.6", GitCommit:"d921bc6d1810da51177fbd0ed61dc811c5228097", GitTreeState:"clean", BuildDate:"2021-10-27T17:50:34Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5-gke.1802", GitCommit:"d464a255a4acb8dca1e0914586f4f5085681fb51", GitTreeState:"clean", BuildDate:"2021-10-21T21:43:23Z", GoVersion:"go1.16.7b7", Compiler:"gc", Platform:"linux/amd64"}

I'd rather not post the grafana-dashboardDefinitions.yaml file as it's like a bajillion lines and I'm not sure if it contains anything sensitive. Am I correct in assessing the situation (in my case) as an issue with the Thanos mixin rather than flux or k8s?

itspngu avatar Nov 19 '21 11:11 itspngu

Am I correct in assessing the situation (in my case) as an issue with the Thanos mixin rather than flux or k8s?

Yes, if the Kubernetes API server-side apply validation finds the manifest invalid, there's nothing we can do in Flux.

stefanprodan avatar Nov 19 '21 11:11 stefanprodan

Hi, we're facing the same issue as @castorls with flux 0.23.0. The error message says: "too many fields within: xyz-admin-teamabc_admin_rbac.authorization.k8s.io"

Checks run against the RoleBinding manifest:

  1. kubeval --> says o.k.
  2. kubectl apply --dry-run=client --> says o.k.
  3. kubectl apply --dry-run=server --> says o.k.
  4. kubectl apply --> Resources are applied to cluster

Exactly the same gitops-repo was applied earlier (flux 0.17.1) to another cluster without any problems.

I also downgraded kustomize-controller to v0.14.1 as suggested by @castorls and it works immediately.

Nevertheless flux2 is a great tool!

doc-olliday avatar Nov 25 '21 07:11 doc-olliday

@doc-olliday Flux does kubectl apply --server-side -f; can you please try this with your RBAC? If applying with --server-side works, please post the RBAC here so we can replicate the issue. Thanks.

stefanprodan avatar Nov 25 '21 07:11 stefanprodan

kubectl apply --server-side -f worked without any problem.

Here is the content of the manifest. Some sensitive data is anonymised:

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: xxx-admin-abcabcabc_admin
  namespace: foo
subjects:
  - kind: Group
    name: abcabcabc_admin
roleRef:
  kind: ClusterRole
  name: crc-namespace-admin
  apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: xxx-edit-abcabcabc_edit
  namespace: foo
subjects:
  - kind: Group
    name: abcabcabc
  - kind: Group
    name: abcabcabc_edit
roleRef:
  kind: ClusterRole
  name: crc-namespace-edit
  apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: xxx-view-abcabcabc_view
  namespace: foo
subjects:
  - kind: Group
    name: abcabcabc_view
roleRef:
  kind: ClusterRole
  name: crc-namespace-view
  apiGroup: rbac.authorization.k8s.io

doc-olliday avatar Nov 25 '21 08:11 doc-olliday

This error comes from an upstream library sigs.k8s.io/cli-utils because it incorrectly assumes a resource cannot have an underscore in its name.
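
The inventory entries in the Kustomization status earlier in this thread show the ID format that library uses, which explains the message; an illustration (the ID layout described here is an assumption based on those entries):

# cli-utils inventory IDs have the shape <namespace>_<name>_<group>_<kind>, e.g.
#   monitoring_grafana-dashboard-apiserver__ConfigMap                (empty core group)
#   monitoring_prometheus-k8s_rbac.authorization.k8s.io_ClusterRole
# A name that itself contains "_" adds extra separators, so parsing the ID back
# yields more fields than expected:
#   monitoring_grafana-dashboard-bucket_replicate__ConfigMap   ->  "too many fields within: ..."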

A workaround for this error is to remove the underscores from the names of your resources.
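
For the RoleBinding manifests posted above, for example, only metadata.name appears to need the rename, since that is what ends up in the inventory ID; the Group subject can keep its underscore. A minimal sketch, reusing the anonymised names from this thread:

kubectl apply --server-side --dry-run=server -f - <<'EOF'
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: xxx-admin-abcabcabc-admin   # was: xxx-admin-abcabcabc_admin
  namespace: foo
subjects:
  - kind: Group
    name: abcabcabc_admin           # subject names are not part of the inventory ID
roleRef:
  kind: ClusterRole
  name: crc-namespace-admin
  apiGroup: rbac.authorization.k8s.io
EOF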

somtochiama avatar Nov 25 '21 13:11 somtochiama

To be able to rename resources (remove the underscores) without deleting anything, an older Flux v2 is needed, one without this bug.

But if Flux v2 has already been upgraded to a version with this bug, and one is fine with deleting and recreating the resources with underscores in their names, note that you will have to delete them manually (kubectl or helm CLI). You will also have to clean up the Kustomization resource they were managed under, since Flux v2 maintains an inventory in the status subresource of the Kustomization, and kustomize-controller will not successfully reconcile the Kustomization until that inventory has been cleaned up too.

For the status inventory, the following workaround did not delete any other resource/HelmRelease in the same Kustomization (other than the one with underscores), but recovered the Kustomization so it could reconcile again (a rough kubectl sketch follows the list):

  • scale kustomize-controller down to 0
  • remove the finalizer and delete the Kustomization whose inventory contains an item with an underscore in the resource name (if it's the bootstrap/root Kustomization, save its YAML before deleting and edit it: remove status, annotations, finalizers, generation, resourceVersion and uid, i.e. keep apiVersion, kind, metadata.name, metadata.namespace, and all of spec.*)
  • scale kustomize-controller back up
  • recreate the deleted "bootstrap" Kustomization (or watch kustomize-controller recreate it, if it's a "leaf" Kustomization that had issues)
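
In kubectl terms, roughly (a sketch; <name> and <namespace> are placeholders for the affected Kustomization, and the backup/re-apply steps only matter for a bootstrap/root Kustomization):

kubectl -n flux-system scale deployment kustomize-controller --replicas=0
kubectl -n <namespace> get kustomization <name> -o yaml > kustomization-backup.yaml
kubectl -n <namespace> patch kustomization <name> --type=merge -p '{"metadata":{"finalizers":null}}'
kubectl -n <namespace> delete kustomization <name>
kubectl -n flux-system scale deployment kustomize-controller --replicas=1
# after stripping status/annotations/finalizers/generation/resourceVersion/uid as described above:
kubectl -n <namespace> apply -f kustomization-backup.yaml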

stevo-f3 avatar Dec 07 '21 16:12 stevo-f3

This error comes from an upstream library sigs.k8s.io/cli-utils because it incorrectly assumes a resource cannot have an underscore in its name.

A workaround for this error is to remove the underscores from the names of your resources.

Pretty sure RFC1123-style naming is still a thing.

Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names

gladiatr72 avatar Dec 13 '21 22:12 gladiatr72

The documentation for RoleBinding says that their names must be valid path segment names. That restricts them from containing things like ., .., and /, but doesn't seem to require RFC 1123 names.

marcsaegesser avatar Jan 11 '22 20:01 marcsaegesser

I am impacted by this issue too. I removed the problematic field, but the controller still displays the error, although the resource does not exist in the Git repo anymore.

mtparet avatar Feb 18 '22 15:02 mtparet

Even though I fixed the underscores in the descriptor names and made sure the tree of Flux resources (sources, Kustomizations) referred to the proper revision, I wasn't able to reconcile the actual revision; the kustomize-controller still threw an error. (Just to be sure, I manually deleted the relevant RBAC resources as well.)

What solved the problem was manually deleting the parent Kustomization and then reconciling it manually.
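
In flux CLI terms that is roughly the following (a sketch; <parent> is a placeholder, and note that deleting a Kustomization with prune enabled can remove the objects it manages, so this is a last resort):

flux delete kustomization <parent> --silent
# recreate it (e.g. let the root Kustomization re-apply it, or kubectl apply its manifest), then:
flux reconcile kustomization <parent> --with-source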

ntjn avatar Jun 30 '22 06:06 ntjn

This still appears to be an issue with v2.1.1. Happy to be told that an underscore or colon is not a valid character in a resource name, but we are able to server-side apply a resource with those characters in its name.

scpandit avatar Sep 25 '23 23:09 scpandit