
Random failure of helm-controller to get last release revision

Open XtremeAI opened this issue 2 years ago • 12 comments

Describe the bug

Hi guys,

We run 20+ k8s clusters with workloads managed by Flux on them. Recently I observed that on three environments, starting at different dates and times, all the Helm releases got stuck upgrading and Flux started throwing the following alert for each HelmRelease:

helmrelease/<hr-name>.flux-system
reconciliation failed: failed to get last release revision: query: failed to query with labels: Unauthorized

The quick way to fix that was to bounce the helm-controller: k rollout restart deployment -n flux-system helm-controller. I had to fix all environments quickly as those were production ones.
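In full (with the k alias expanded), and with a check that the new pod comes up cleanly, that was roughly:

    kubectl rollout restart deployment -n flux-system helm-controller
    kubectl rollout status deployment -n flux-system helm-controller --timeout=2m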

Have you observed this problem before, or do you have any ideas why this happens and, more importantly, how to prevent it from happening?

Steps to reproduce

N/A

Expected behavior

N/A

Screenshots and recordings

No response

OS / Distro

N/A

Flux version

13.3

Flux check

N/A

Git provider

No response

Container Registry provider

No response

Additional context

No response

Code of Conduct

  • [X] I agree to follow this project's Code of Conduct

XtremeAI avatar Nov 11 '21 08:11 XtremeAI

At first sight this looks like the helm-controller Pod lost access rights on some API resources. Could you check if anything around RBAC has changed at the time these failures started to happen?
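One way to verify is to impersonate the controller's service account with kubectl auth can-i (names below assume the default flux-system install):

    # Can the helm-controller service account still read Helm release storage Secrets?
    kubectl auth can-i list secrets -n flux-system \
      --as=system:serviceaccount:flux-system:helm-controller

    # Full overview of what that service account is allowed to do in the namespace
    kubectl auth can-i --list -n flux-system \
      --as=system:serviceaccount:flux-system:helm-controller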

makkes avatar Nov 13 '21 22:11 makkes

No, there were clearly no configuration changes; if there had been, a simple deployment restart would not have helped. But you are right, Unauthorized looks like the helm-controller suddenly struggled to access something, and this error message apparently comes from some Helm operations.

XtremeAI avatar Nov 17 '21 07:11 XtremeAI

Same for me, helm-controller pod restart fixed the problem.

starteleport avatar Dec 08 '21 19:12 starteleport

Same here, fixed by restart

miph86 avatar Jan 10 '22 11:01 miph86

Seeing the same issue, resolved by a helm-controller pod restart after months of uptime.

zmpeg avatar Jan 13 '22 18:01 zmpeg

At first sight this looks like the helm-controller Pod lost access rights on some API resources.

Seems that Helm can't list secrets to find the release storage, as if the helm-controller service account lost its privileges. But if that was the case, then all the other API queries should've failed before it reached the helm function.

Maybe these HelmReleases have spec.ServiceAccountName specified?
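A quick way to check that across the cluster, assuming the helm.toolkit.fluxcd.io API group, is something like:

    # Print namespace/name and spec.serviceAccountName (blank if unset) for every HelmRelease
    kubectl get helmreleases.helm.toolkit.fluxcd.io -A \
      -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.spec.serviceAccountName}{"\n"}{end}'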

stefanprodan avatar Jan 13 '22 18:01 stefanprodan

We've just experienced the same issue, no changes to the RBAC for the cluster, and none of the helmreleases define a service account name. Very strange.

Alan01252 avatar Feb 16 '22 10:02 Alan01252

I'm not sure whether this is causation or correlation, but someone with more experience might enlighten me. I restarted the helm-controller as suggested by others here, and then we noticed that the certificate for our Multus daemonset in our EKS cluster had expired, preventing the controller from spinning up again.

Restarting the Multus daemonset regenerated the certs, the helm-controller spun back up, and everything was resolved.

Alan01252 avatar Feb 16 '22 10:02 Alan01252

At first sight this looks like the helm-controller Pod lost access rights on some API resources.

Seems that Helm can't list secrets to find the release storage, as if the helm-controller service account lost its privileges. But if that was the case, then all the other API queries should've failed before it reached the helm function.

Maybe these HelmReleases have spec.ServiceAccountName specified?

Page 540: https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf

You see these errors if your service account token has expired on a 1.21 or later cluster.

As mentioned in the Kubernetes 1.21 (p. 69) and 1.22 (p. 67) release notes, the BoundServiceAccount token feature that graduated to beta in 1.21 improves the security of service account tokens by allowing workloads running on Kubernetes to request JSON web tokens that are audience, time, and key bound. Service account tokens now have an expiration of one hour. To enable a smooth migration of clients to the newer time-bound service account tokens, Kubernetes adds an extended expiry period to the service account token over the default one hour. For Amazon EKS clusters, the extended expiry period is 90 days. Your Amazon EKS cluster's Kubernetes API server rejects requests with tokens older than 90 days.

The helm-controller's pod was 91 days old when this problem happened. Restarting the pod and refreshing the service account's token brought it back to normal.
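For anyone wanting to spot this before it bites, the pod's age is enough of a signal (label selector assumed from the default Flux manifests):

    # On EKS >= 1.21 the API server rejects projected tokens older than 90 days,
    # so a helm-controller pod approaching that age is at risk
    kubectl get pods -n flux-system -l app=helm-controller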

alfoudari avatar May 04 '22 12:05 alfoudari

@abstractpaper this feels like an EKS bug: the kubelet failed to renew the token and Flux ended up using one that had expired.

Can you please see the troubleshooting guide here: https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md#troubleshooting
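The troubleshooting section there also points to an apiserver metric that counts requests made with stale tokens; where the metrics endpoint is reachable (metric name taken from the KEP, access depends on your RBAC), something like this surfaces it:

    kubectl get --raw /metrics | grep serviceaccount_stale_tokens_total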

stefanprodan avatar May 04 '22 12:05 stefanprodan

Same here, it was fixed by restarting the pod after 110 days of uptime.

migspedroso avatar Aug 01 '22 15:08 migspedroso

@migspedroso which version of Flux are you using? We fixed the stale token issue for helm-controller in v0.31

stefanprodan avatar Aug 01 '22 15:08 stefanprodan

I can confirm this issue is still present at:

flux: v0.33.0
helm-controller: v0.16.0
image-automation-controller: v0.20.0
image-reflector-controller: v0.16.0
kustomize-controller: v0.20.2
notification-controller: v0.21.0
source-controller: v0.21.2

Siebjee avatar Sep 28 '22 11:09 Siebjee

@Siebjee this was fixed back in May in https://github.com/fluxcd/helm-controller/pull/480. You need to upgrade the Flux controllers.

stefanprodan avatar Sep 28 '22 11:09 stefanprodan

Heh, I think I missed that part on this cluster :D

Siebjee avatar Sep 28 '22 11:09 Siebjee

Had the same issue with my cluster.

flux version
flux: v0.35.0
helm-controller: v0.20.1
kustomize-controller: v0.24.4
notification-controller: v0.23.4
source-controller: v0.24.3

Restart of the helm-controller pod resolved the issue.

MKnichal avatar Dec 22 '22 22:12 MKnichal

I thought I'd drop this here in case someone finds this thread: if you are using Flux multi-tenancy (spec.ServiceAccountName is defined in the HelmRelease), the helm-controller requires RW access to Secrets in the namespace where the HelmRelease gets installed (as the Helm metadata is stored in a Secret).
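A quick sanity check via impersonation, with tenant-ns and tenant-sa as placeholder names for the tenant namespace and the service account referenced by the HelmRelease:

    # Helm stores release metadata in Secrets, so the impersonated account needs read/write on them
    kubectl auth can-i get secrets -n tenant-ns --as=system:serviceaccount:tenant-ns:tenant-sa
    kubectl auth can-i create secrets -n tenant-ns --as=system:serviceaccount:tenant-ns:tenant-sa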

Cajga avatar Feb 19 '24 14:02 Cajga