changing releaseName of an existing HelmRelease will cause duplicate deploys
Describe the bug
Changing the `spec.releaseName` of a HelmRelease CRD while keeping the same `metadata.name` for the HelmRelease object will deploy the chart under the new releaseName, but will not delete the release with the previous releaseName.
To Reproduce
Steps to reproduce the behaviour:
0. What's your setup? Helm Operator 1.0.1, Kubernetes 1.16.8 on EKS, configured to use only Helm v3.
- Deploy a HelmRelease object like this:
```yaml
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: airflow
spec:
  ...
```
There is no releaseName in the object above, so the Helm-operator names the release airflow-airflow.
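For reference, the auto-generated name seen here follows a `<namespace>-<name>` pattern. A minimal sketch of that derivation (my own illustration based on the behaviour observed in this issue, not the operator's actual code):

```python
def default_release_name(namespace: str, name: str) -> str:
    """Mimic the observed default: when spec.releaseName is not set,
    the release name is derived from the HelmRelease's namespace and
    metadata.name (an assumption from the behaviour in this issue)."""
    return f"{namespace}-{name}"

# HelmRelease "airflow" in namespace "airflow":
print(default_release_name("airflow", "airflow"))  # airflow-airflow
```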
- Change the HelmRelease object by adding a spec.releaseName value:
```yaml
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: airflow
spec:
  releaseName: airflow
  ...
```
- Observe that there are now two Helm releases in the namespace: one for the old (auto-generated) releaseName and one for the new.
```
➜ kubectl -n airflow get helmrelease
NAME      RELEASE   PHASE       STATUS     MESSAGE                                                           AGE
airflow   airflow   Succeeded   deployed   Release was successful for Helm release 'airflow' in 'airflow'.   84m

➜ helm -n airflow list --all
NAME              NAMESPACE   REVISION   UPDATED                                   STATUS     CHART           APP VERSION
airflow           airflow     1          2020-05-13 18:56:17.485542152 +0000 UTC   deployed   airflow-6.7.2   1.10.4
airflow-airflow   airflow     4          2020-05-13 17:49:27.697350087 +0000 UTC   deployed   airflow-6.7.2   1.10.4
```
Expected behavior
The airflow-airflow release would first be deleted. Then the airflow release would be deployed.
Logs
```
helm-operator-8b6549596-q4mgb flux-helm-operator ts=2020-05-13T18:56:17.312814848Z caller=release.go:75 component=release release=airflow targetNamespace=airflow resource=airflow:helmrelease/airflow helmVersion=v3 info="starting sync run"
helm-operator-8b6549596-q4mgb flux-helm-operator ts=2020-05-13T18:56:17.322442069Z caller=release.go:270 component=release release=airflow targetNamespace=airflow resource=airflow:helmrelease/airflow helmVersion=v3 info="running installation" phase=install
helm-operator-8b6549596-q4mgb flux-helm-operator ts=2020-05-13T18:56:17.870571579Z caller=helm.go:69 component=helm version=v3 info="creating 27 resource(s)" targetNamespace=airflow release=airflow
helm-operator-8b6549596-q4mgb flux-helm-operator ts=2020-05-13T18:56:18.072759723Z caller=release.go:279 component=release release=airflow targetNamespace=airflow resource=airflow:helmrelease/airflow helmVersion=v3 info="installation succeeded" revision=6.7.2 phase=install
```
Prior to this it was giving the usual sync messages about the airflow-airflow release, and then after, it only talked about airflow.
Additional context
- Helm Operator version: 1.0.1
- Targeted Helm version: v3
- Kubernetes version: 1.16.8
- Git provider: Gitlab
- Container registry provider: Gitlab
Deleting releases is dangerous, since e.g. changing the releaseName could be accidental. Perhaps an error could be thrown by checking the status object and seeing that there is already an existing release with a different releaseName or targetNamespace for this HelmRelease. On the other hand, that may be a valid use case when migrating releases to new homes.
But I think you could also just change metadata.name/namespace instead of spec.releaseName/targetNamespace and rely on e.g. kubectl apply --prune to delete the old HelmRelease?
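The prune-based approach mentioned above could look something like this (a hedged sketch: the manifests directory and label selector are hypothetical, and `--prune` requires a selector to scope what it may delete):

```sh
# Apply the manifests directory and prune previously-applied objects
# (e.g. a renamed HelmRelease) that no longer appear in it.
# The label selector below is an example, not from this issue.
kubectl apply -f ./manifests/ --prune -l app.kubernetes.io/part-of=airflow
```

This only prunes objects that carry the given label and were applied the same way, so the old HelmRelease would need to have been labelled accordingly.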
I'm not sure there's an easy right answer, but it was surprising behavior.
If I'd renamed the HelmRelease object (changed metadata.name) as well as changing the spec.releaseName, things would have worked fine. As you say, flux would have noticed the absence of the old object and deleted it. I didn't want everything named airflow-airflow-$whatever, so I just thought "change the releaseName". It didn't occur to me to also change the name of the HelmRelease object itself.
Ideally releaseName would be an immutable field, as the actual created helm releases themselves cannot be renamed. Analogous to the selector of a Deployment being immutable. In both cases, changing the contents of the field will break the reference to the controller's managed object.
There appears to be a KEP for supporting immutable fields in CRDs, but until that lands, it might require a validating webhook to effect that, which is probably more trouble than it's worth.
Perhaps there should just be a warning - "don't change spec.releaseName without also changing the HelmRelease's metadata.name/namespace"
How do I delete the duplicated release? There is no HelmRelease object for it anymore, because the HelmRelease now only refers to the new release name.
Solved it by:
- changing `releaseName` back to the old name
- deleting the `HelmRelease`
- adding the `HelmRelease` again with the new `releaseName`

After every step I had to commit and deploy it on the cluster.
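For anyone else left with an orphaned release, removing it directly with Helm should also work, since no HelmRelease refers to it anymore (assuming Helm v3, as in this issue; verify the release name with `helm list` first):

```sh
# Uninstall the leftover auto-generated release in the airflow namespace.
helm -n airflow uninstall airflow-airflow
```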
Sorry if your issue remains unresolved. The Helm Operator is in maintenance mode, we recommend everybody upgrades to Flux v2 and Helm Controller.
A new release of Helm Operator is out this week, 1.4.4.
We will continue to support Helm Operator in maintenance mode for an indefinite period of time, and eventually archive this repository.
Please be aware that Flux v2 has a vibrant and active developer community who are actively working through minor releases and delivering new features on the way to General Availability for Flux v2.
In the meantime, this repo will still be monitored, but support is basically limited to migration issues only. I will have to close many issues today without reading them all in detail because of time constraints. If your issue is very important, you are welcome to reopen it, but given the staleness of all issues at this point, a new report is more likely to be in order. Please open another issue in the appropriate Flux v2 repo if you have unresolved problems that prevent your migration.
Helm Operator releases will continue as possible for a limited time, as a courtesy for those who still cannot migrate yet, but they are strongly discouraged for ongoing production use: our strict adherence to semver backward-compatibility guarantees limits many dependencies, and we can only upgrade them so far without breaking compatibility. So there are likely known CVEs that cannot be resolved.
We recommend upgrading ASAP to Flux v2, which is actively maintained.
I am going to go ahead and close every issue at once today. Thanks for participating in Helm Operator and Flux! 💚 💙