helm-operator continually installs the same helm chart
Describe the bug
We are installing kiali-server via the Flux Helm Operator. There are no changes to the Kiali manifests or to the chart, but helm-operator detects a difference and installs the Kiali Helm chart over and over again at every sync interval. This has resulted in thousands of Helm revisions for the same chart.
To Reproduce
flux manifest
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: kiali
spec:
  releaseName: kiali
  chart:
    repository: https://kiali.org/helm-charts
    name: kiali-server
    version: 1.25.0
  values:
    auth:
      strategy: token
    deployment:
      ingress_enabled: false
      replicas: 3
      view_only_mode: true
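The repeated installs also show up in the release history (a sketch; this assumes the Helm v3 CLI has access to the cluster, and --max simply limits the output):
helm history kiali -n istio-system --max 10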
kubectl describe helmrelease kiali -n istio-system
Name:         kiali
Namespace:    istio-system
Labels:       fluxcd.io/sync-gc-mark=sha256.fUmtmNyY60STNGvbi4AWtweJisQ9HdZTDGPCfgmeZ4s
Annotations:  fluxcd.io/sync-checksum: 6537cb30aa7cfd600d178afc0863cc9eeff749ae
              kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"helm.fluxcd.io/v1","kind":"HelmRelease","metadata":{"annotations":{"fluxcd.io/sync-checksum":"6537cb30aa7cfd600d178afc0863c...
API Version:  helm.fluxcd.io/v1
Kind:         HelmRelease
Metadata:
  Creation Timestamp:  2020-11-05T12:00:34Z
  Generation:          1
  Resource Version:    61154944
  Self Link:           /apis/helm.fluxcd.io/v1/namespaces/istio-system/helmreleases/kiali
  UID:                 4feb3755-f957-4e60-ac74-7c05705f5e6a
Spec:
  Chart:
    Name:        kiali-server
    Repository:  https://kiali.org/helm-charts
    Version:     1.25.0
  Release Name:  kiali
  Values:
    Auth:
      Strategy:  token
    Deployment:
      ingress_enabled:  false
      node_selector:
        Node Type:      mgmt
      Replicas:         3
      view_only_mode:   true
Status:
  Conditions:
    Last Transition Time:  2020-11-05T12:00:34Z
    Last Update Time:      2020-11-05T12:00:34Z
    Message:               Chart fetch was successful for Helm release 'kiali' in 'istio-system'.
    Reason:                ChartFetched
    Status:                True
    Type:                  ChartFetched
    Last Transition Time:  2020-11-06T10:18:55Z
    Last Update Time:      2020-11-06T10:18:55Z
    Message:               Installation or upgrade succeeded for Helm release 'kiali' in 'istio-system'.
    Reason:                Deployed
    Status:                True
    Type:                  Deployed
    Last Transition Time:  2020-11-05T12:10:42Z
    Last Update Time:      2020-11-06T10:18:56Z
    Message:               Release was successful for Helm release 'kiali' in 'istio-system'.
    Reason:                Succeeded
    Status:                True
    Type:                  Released
  Last Attempted Revision:  1.25.0
  Observed Generation:      1
  Phase:                    Succeeded
  Release Name:             kiali
  Release Status:           deployed
  Revision:                 1.25.0
Events:
  Type    Reason         Age                   From           Message
  ----    ------         ----                  ----           -------
  Normal  ReleaseSynced  3m5s (x432 over 21h)  helm-operator  managed release 'kiali' in namespace 'istio-system' synchronized
Logs
{"caller":"helm.go:69","component":"helm","info":"performing update for kiali","release":"kiali","targetNamespace":"istio-system","ts":"2020-11-06T10:00:55.550113844Z","version":"v3"}
{"caller":"helm.go:69","component":"helm","info":"dry run for kiali","release":"kiali","targetNamespace":"istio-system","ts":"2020-11-06T10:00:55.656900076Z","version":"v3"}
{"caller":"release.go:303","component":"release","helmVersion":"v3","info":"difference detected during release comparison","phase":"dry-run-compare","release":"kiali","resource":"istio-system:helmrelease/kiali","targetNamespace":"istio-system","ts":"2020-11-06T10:00:56.060641089Z"}
{"action":"upgrade","caller":"release.go:353","component":"release","helmVersion":"v3","info":"running upgrade","release":"kiali","resource":"istio-system:helmrelease/kiali","targetNamespace":"istio-system","ts":"2020-11-06T10:00:56.060698897Z"}
{"caller":"helm.go:69","component":"helm","info":"preparing upgrade for kiali","release":"kiali","targetNamespace":"istio-system","ts":"2020-11-06T10:00:56.089728784Z","version":"v3"}
{"caller":"helm.go:69","component":"helm","info":"resetting values to the chart's original version","release":"kiali","targetNamespace":"istio-system","ts":"2020-11-06T10:00:56.106530037Z","version":"v3"}
{"caller":"helm.go:69","component":"helm","info":"performing update for kiali","release":"kiali","targetNamespace":"istio-system","ts":"2020-11-06T10:00:56.896154083Z","version":"v3"}
{"caller":"helm.go:69","component":"helm","info":"creating upgraded release for kiali","release":"kiali","targetNamespace":"istio-system","ts":"2020-11-06T10:00:57.091720321Z","version":"v3"}
{"caller":"helm.go:69","component":"helm","info":"checking 27 resources for changes","release":"kiali","targetNamespace":"istio-system","ts":"2020-11-06T10:00:57.417725844Z","version":"v3"}
{"caller":"helm.go:69","component":"helm","info":"Looks like there are no changes for ServiceAccount \"kiali\"","release":"kiali","targetNamespace":"istio-system","ts":"2020-11-06T10:00:57.430886037Z","version":"v3"}
{"caller":"helm.go:69","component":"helm","info":"Looks like there are no changes for ClusterRole \"kiali-viewer\"","release":"kiali","targetNamespace":"istio-system","ts":"2020-11-06T10:00:57.47432003Z","version":"v3"}
...
Detailed log snippet from helm-operator (after setting logReleaseDiff in helm-operator values.yaml)
="difference detected during release comparison" diff=" &helm.Chart{\n \tName: \"kiali-server\",\n \tVersion: \"1.26.0\",\n \tAppVersion: \"v1.26.0\",\n
\t\t\"istio_namespace\": string(\"\"),\n \t\t\"kiali_route_url\": string(\"\"),\n- \t\t\"login_token\":
map[string]interface{}{\"signing_key\": string(\"WlQ0a4dPfradWB8Q123\")},\n+ \t\t\"login_token\":
map[string]interface{}{\"signing_key\": string(\"xdqXEd33OKqGAGg4xx\")}
map[string]interface{}{\"signing_key\": string(\"9goVqx8mdvIXDF6B234\")},\n+ \t\t\"login_token\":
map[string]interface{}{\"signing_key\": string(\"O0QcjsTsgkXngay7111\")}
As shown in the diff above, it appears that the login authentication method, when using a ServiceAccount, generates a new signing_key for the login_token on every comparison. Is there any way we can get helm-operator to ignore this difference?
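One possible workaround (a sketch, assuming the kiali-server chart accepts login_token.signing_key as a value, which the diff above suggests) is to pin the signing key in the HelmRelease values so the dry-run comparison no longer sees a freshly generated value:
spec:
  values:
    login_token:
      signing_key: "CHANGE-ME-fixed-key"  # hypothetical placeholder; a fixed value instead of the chart-generated one
To keep the key out of git, the helm.fluxcd.io/v1 HelmRelease also supports valuesFrom with a secretKeyRef, which may be a better fit.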
Additional context
- helm-operator chart: 1.2.0, version: 1.2.0
- Kubernetes: EKS 1.16
- kiali-server chart: 1.25.0, version: 1.25.0
We encountered the same issue when specifying a chart repository, but not when using git as the chart source.
I have the same problem with the ingress-nginx, drone-runner-kube, and cert-manager charts; hundreds of revisions are being created at the moment.
spec:
  releaseName: ingress-nginx
  chart:
    repository: https://kubernetes.github.io/ingress-nginx
    name: ingress-nginx
    version: 3.15.2

spec:
  releaseName: drone-runner-kube
  chart:
    git: https://github.com/drone/charts.git
    ref: master
    path: charts/drone-runner-kube

spec:
  releaseName: cert-manager
  chart:
    repository: https://charts.jetstack.io
    name: cert-manager
    version: v1.1.0
It is also happening on a custom chart; I'm going to enable the log diff.
The helm-operator logs for nginx don't say anything, just that the resources are being recreated; log attached: flux-helm-operator-5f77c454b6-94nfn-1608403896342881000.log
I think it might have something to do with an old CRD. I uninstalled helm-operator, removed the CRD, performed a fresh install, and I can't see the problem anymore.
The CRD was pretty old, while helm-operator itself had been kept up to date.
I can confirm that if you upgrade to 1.2.0 without also upgrading the CRDs, your charts will end up in a state where they continuously upgrade even when there is no diff (it somehow fails to update status.lastAttemptedRevision).
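For anyone hitting this, re-applying the CRDs that ship with the operator alongside the chart upgrade should avoid that state. A sketch, assuming the deploy/crds.yaml manifest in the fluxcd/helm-operator repository:
kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/crds.yaml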
Sorry if your issue remains unresolved. The Helm Operator is in maintenance mode; we recommend everybody upgrade to Flux v2 and the Helm Controller.
A new release of Helm Operator is out this week, 1.4.4.
We will continue to support Helm Operator in maintenance mode for an indefinite period of time, and eventually archive this repository.
Please be aware that Flux v2 has a vibrant and active developer community that is working through minor releases and delivering new features on the way to General Availability.
In the meantime, this repo will still be monitored, but support is basically limited to migration issues only. I will have to close many issues today without reading them all in detail because of time constraints. If your issue is very important, you are welcome to reopen it, but given how stale all issues are at this point, a fresh report is more likely to be in order. If you have unresolved problems that prevent your migration, please open an issue in the appropriate Flux v2 repo.
Helm Operator releases will continue as possible for a limited time, as a courtesy for those who cannot migrate yet, but they are strongly discouraged for ongoing production use: our strict adherence to semver backward-compatibility guarantees pins many dependencies, we can only upgrade them so far without breaking compatibility, and so there are likely known CVEs that cannot be resolved.
We recommend upgrading ASAP to Flux v2, which is actively maintained.
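For orientation, the Kiali release from this issue would look roughly like this after migrating (a sketch against the v2beta1 helm-controller and v1beta1 source-controller APIs; the intervals are arbitrary and field names should be checked against the Flux v2 migration guide):
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: kiali
  namespace: istio-system
spec:
  interval: 1h          # how often to refresh the repository index
  url: https://kiali.org/helm-charts
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kiali
  namespace: istio-system
spec:
  interval: 10m         # reconciliation interval
  releaseName: kiali
  chart:
    spec:
      chart: kiali-server
      version: 1.25.0
      sourceRef:
        kind: HelmRepository
        name: kiali
  values:
    auth:
      strategy: token
    deployment:
      ingress_enabled: false
      replicas: 3
      view_only_mode: true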
I am going to go ahead and close every issue at once today. Thanks for participating in Helm Operator and Flux! 💚 💙