Random failure of helm-controller to get last release revision
Describe the bug
Hi guys,
We run 20+ k8s clusters with workloads managed by Flux on them. Recently I observed that on three environments starting at different dates and times all the helm releases got stuck upgrading and Flux started to throw the following alert for each helm release:
helmrelease/<hr-name>.flux-system
reconciliation failed: failed to get last release revision: query: failed to query with labels: Unauthorized
The quick way to fix it was to bounce the helm-controller: `kubectl rollout restart deployment -n flux-system helm-controller`. I had to fix all environments quickly as those were production ones.
Have you observed this problem before, or do you have any ideas why it happens and, more importantly, how to prevent it from happening?
Steps to reproduce
N/A
Expected behavior
N/A
Screenshots and recordings
No response
OS / Distro
N/A
Flux version
13.3
Flux check
N/A
Git provider
No response
Container Registry provider
No response
Additional context
No response
Code of Conduct
- [X] I agree to follow this project's Code of Conduct
At first sight this looks like the helm-controller Pod lost access rights on some API resources. Could you check if anything around RBAC has changed at the time these failures started to happen?
No, there were clearly no configuration changes, because if there had been, a simple deployment restart would not have helped. But you are right, `Unauthorized` looks like helm-controller suddenly lost access to something, and this error message apparently comes from some Helm operation.
Same for me, a helm-controller pod restart fixed the problem.
Same here, fixed by restart
Seeing the same issue, resolved by a helm-controller pod restart after months of uptime.
At first sight this looks like the helm-controller Pod lost access rights on some API resources.
Seems that Helm can't list secrets to find the release storage, as if the helm-controller service account lost its privileges. But if that was the case, then all the other API queries should've failed before it reached the helm function.
Maybe these HelmReleases have `spec.serviceAccountName` specified?
We've just experienced the same issue, no changes to the RBAC for the cluster, and none of the helmreleases define a service account name. Very strange.
I'm not sure if this is causation or correlation, but someone with more experience might enlighten me. I restarted the helm-controller as suggested by others here, and then we noticed that the certificate for our Multus DaemonSet in our EKS cluster had expired, preventing the controller from spinning up again.
Restarting the Multus DaemonSet regenerated the certs, the helm-controller spun back up, and everything was resolved.
At first sight this looks like the helm-controller Pod lost access rights on some API resources.
Seems that Helm can't list secrets to find the release storage, as if the helm-controller service account lost its privileges. But if that was the case, then all the other API queries should've failed before it reached the helm function.
Maybe these HelmReleases have `spec.serviceAccountName` specified?
Page 540: https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf
You see these errors if your service account token has expired on a 1.21 or later cluster.
As mentioned in the Kubernetes 1.21 (p. 69) and 1.22 (p. 67) release notes, the BoundServiceAccount token feature that graduated to beta in 1.21 improves the security of service account tokens by allowing workloads running on Kubernetes to request JSON web tokens that are audience, time, and key bound. Service account tokens now have an expiration of one hour. To enable a smooth migration of clients to the newer time-bound service account tokens, Kubernetes adds an extended expiry period to the service account token over the default one hour. For Amazon EKS clusters, the extended expiry period is 90 days. Your Amazon EKS cluster's Kubernetes API server rejects requests with tokens older than 90 days.
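One way to check whether this is what you are hitting is to decode the mounted service account token and compare its `exp` claim against the current time. This is an illustrative sketch, not anything from Flux or EKS; the helper names are mine, and on a live pod you would read the token from `/var/run/secrets/kubernetes.io/serviceaccount/token` instead of fabricating one:

```python
import base64
import json
import time

def jwt_expiry(token: str) -> int:
    """Return the `exp` claim (Unix seconds) from a JWT, without verifying the signature."""
    payload_b64 = token.split(".")[1]
    # base64url payloads are stored without padding; restore it before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload["exp"]

def fake_token(exp: int) -> str:
    """Build a structurally valid but unsigned JWT (header.payload.signature) for the demo."""
    enc = lambda obj: base64.urlsafe_b64encode(
        json.dumps(obj).encode()).rstrip(b"=").decode()
    return ".".join([enc({"alg": "none"}), enc({"exp": exp, "iat": exp - 3600}), "sig"])

# A token that expired one second ago: the API server would reject it with Unauthorized.
token = fake_token(int(time.time()) - 1)
print(time.time() > jwt_expiry(token))  # prints True
```

If the printed expiry is in the past while the token file on disk is fresh, the process is caching a stale token in memory, which matches the behaviour fixed in newer helm-controller releases.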
Helm controller's pod was 91 days old when this problem happened. Restarting the pod and refreshing the service account's token did bring it back to normal.
@abstractpaper this feels like an EKS bug, kubelet failed to renew the token and Flux ended up using one that has expired.
Can you please see the troubleshooting guide here: https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md#troubleshooting
Same here, it was fixed by restarting the pod after 110 days of uptime.
@migspedroso which version of Flux are you using? We fixed the stale token issue for helm-controller in v0.31
I can confirm this issue is still present at:
flux: v0.33.0
helm-controller: v0.16.0
image-automation-controller: v0.20.0
image-reflector-controller: v0.16.0
kustomize-controller: v0.20.2
notification-controller: v0.21.0
source-controller: v0.21.2
@Siebjee this has been fixed back in May in https://github.com/fluxcd/helm-controller/pull/480 You need to upgrade the Flux controllers.
Heh, I think i missed that part on this cluster :D
Had same issue with my cluster.
flux version
flux: v0.35.0
helm-controller: v0.20.1
kustomize-controller: v0.24.4
notification-controller: v0.23.4
source-controller: v0.24.3
Restart of the helm-controller pod resolved the issue.
I thought I'd drop this here in case someone finds this thread: if you are using Flux multi-tenancy (`spec.serviceAccountName` is defined in the `HelmRelease`), the helm-controller requires RW access to Secrets in the namespace where the `HelmRelease` gets installed (as the Helm release metadata is stored in a Secret).
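For reference, a minimal sketch of the RBAC such a tenant service account might need. The names and namespace below are placeholders I made up for illustration, not values from this thread:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: helm-release-storage
  namespace: tenant-ns          # namespace where the HelmRelease gets installed
rules:
  - apiGroups: [""]
    resources: ["secrets"]      # Helm stores release metadata in Secrets here
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-release-storage
  namespace: tenant-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: helm-release-storage
subjects:
  - kind: ServiceAccount
    name: tenant-sa             # the account named in spec.serviceAccountName
    namespace: tenant-ns
```

The tenant's workload permissions come on top of this; the Role above only covers Helm's release storage.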