flux2
Namespace Labels missing
Describe the bug
Namespace labels aren't synced to the Kubernetes cluster.
Steps to reproduce
- Create an age key with SOPS
- Bootstrap Flux with a namespace that contains a label, e.g.
goldilocks.fairwinds.com/enabled: true
- All namespaces have labels defined (pod-security standards and sometimes more) that aren't added to the Kubernetes cluster. After bootstrapping, the namespace looks like this:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    kubernetes.io/metadata.name: longhorn-system
    kustomize.toolkit.fluxcd.io/name: longhorn
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  name: longhorn-system
- Run kustomize build locally in the folder, e.g. /infrastructure/oracle/longhorn:
kustomize build --load-restrictor=LoadRestrictionsNone > output.yaml
The namespace then looks like this:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    goldilocks.fairwinds.com/enabled: true
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/audit-version: v1.29
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/enforce-version: v1.29
    pod-security.kubernetes.io/warn: privileged
    pod-security.kubernetes.io/warn-version: v1.29
  name: longhorn-system
Expected behavior
Flux should sync the defined labels to the Kubernetes cluster. The namespace should look as defined:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    goldilocks.fairwinds.com/enabled: true
    kubernetes.io/metadata.name: longhorn-system
    kustomize.toolkit.fluxcd.io/name: longhorn
    kustomize.toolkit.fluxcd.io/namespace: flux-system
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/audit-version: v1.29
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/enforce-version: v1.29
    pod-security.kubernetes.io/warn: privileged
    pod-security.kubernetes.io/warn-version: v1.29
  name: longhorn-system
Screenshots and recordings
No response
OS / Distro
Ubuntu 22.04.3 LTS
Flux version
v2.2.3
Flux check
► checking prerequisites
✔ Kubernetes 1.29.1 >=1.26.0-0
► checking version in cluster
✔ distribution: flux-v2.2.3
✔ bootstrapped: true
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.37.4
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v1.2.2
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v1.2.4
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v1.2.4
► checking crds
✔ alerts.notification.toolkit.fluxcd.io/v1beta3
✔ buckets.source.toolkit.fluxcd.io/v1beta2
✔ gitrepositories.source.toolkit.fluxcd.io/v1
✔ helmcharts.source.toolkit.fluxcd.io/v1beta2
✔ helmreleases.helm.toolkit.fluxcd.io/v2beta2
✔ helmrepositories.source.toolkit.fluxcd.io/v1beta2
✔ kustomizations.kustomize.toolkit.fluxcd.io/v1
✔ ocirepositories.source.toolkit.fluxcd.io/v1beta2
✔ providers.notification.toolkit.fluxcd.io/v1beta3
✔ receivers.notification.toolkit.fluxcd.io/v1
✔ all checks passed
Git provider
GitHub
Container Registry provider
No response
Additional context
No response
Code of Conduct
- [X] I agree to follow this project's Code of Conduct
Found the issue: when the label value doesn't have quotes around true, it fails.
goldilocks.fairwinds.com/enabled: true
Adding quotes around "true" makes it work again:
goldilocks.fairwinds.com/enabled: "true"
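The root cause is YAML type inference: an unquoted true resolves to a boolean, not a string. A minimal sketch of the resolution rule (simplified for illustration, not the parser kustomize actually uses):

```python
def resolve_scalar(raw: str):
    """Resolve a YAML scalar roughly the way the core schema would."""
    # Quoted values are always strings; strip the surrounding quotes.
    if len(raw) >= 2 and raw[0] == raw[-1] and raw[0] in "'\"":
        return raw[1:-1]
    # Unquoted true/false resolve to booleans, not strings.
    if raw.lower() in ("true", "false"):
        return raw.lower() == "true"
    return raw

# goldilocks.fairwinds.com/enabled: true   -> boolean, not a valid label value
assert resolve_scalar("true") is True
# goldilocks.fairwinds.com/enabled: "true" -> string, accepted
assert resolve_scalar('"true"') == "true"
```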
Is it possible to add a log message here? Checks with kustomize build also pass when no quotes are set.
Is it possible here to add a log message?
It is not possible; the Kubernetes API drops these during server-side apply without logging anything.
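For context, Kubernetes labels are a map of strings, and label values must be at most 63 characters matching a fixed pattern, so a boolean value can never pass validation. A rough sketch of the rule (the valid_label_value helper below is hypothetical, for illustration only):

```python
import re

# Pattern Kubernetes uses for label values (an empty value is allowed).
LABEL_VALUE = re.compile(r"^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$")

def valid_label_value(value) -> bool:
    # Labels are map[string]string: a YAML boolean is not a string at all.
    if not isinstance(value, str):
        return False
    return len(value) <= 63 and LABEL_VALUE.match(value) is not None

assert valid_label_value("true")        # quoted in YAML -> a string -> valid
assert valid_label_value("privileged")
assert not valid_label_value(True)      # unquoted in YAML -> a boolean -> invalid
```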