terraform-provider-helm
helm_release: namespace attribute doesn't work in provider version 3
Terraform, Provider, Kubernetes and Helm Versions
Terraform version: latest
Provider version: 3.0.1
Kubernetes version: 1.32
Affected Resource(s)
- helm_release
Terraform Configuration Files
resource "helm_release" "release" {
  name      = "releasename"
  [...]
  namespace = "namespace"
  values    = [...]
}
The issue I'm experiencing is related to the namespace: some Helm releases are using the namespace of the Jenkins pod that executes the terraform apply command.
It looks like the new version will put everything into the default namespace if a template doesn't specify metadata.namespace. Add
metadata:
  namespace: {{ .Release.Namespace }}
to your templates, otherwise the resources end up in default (with versions <3 they were put into the selected namespace).
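For example, a chart template that pins its resource to the release namespace, as suggested above, might look like this (a sketch; the file, resource, and data names are placeholders):

```yaml
# templates/configmap.yaml -- illustrative template; names are placeholders
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
  # Without this line, provider v3.0.0/v3.0.1 may place the resource in "default"
  namespace: {{ .Release.Namespace }}
data:
  key: value
```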
In our case, the helm releases themselves were installed into the selected namespace, but the resources were created in the default namespace. This is different than past releases.
I am also experiencing this issue, and it appears to have started after upgrading the AWS provider to version 6.0.0.
Hi @dkrizic, I don't think this is related to templates. The releases were already installed on my cluster, but now for some of them (not all) the provider picked up a different namespace (the one where the Jenkins agent pod was running) and used it to install some apps.
So now I have applications installed multiple times, in the original namespace and in the "new" one, similar to what @lftroyer is reporting.
Yes @cw-juyeonyu, they released both at the same time, but I don't think the AWS provider is involved: the only warning I had was about data.aws_region, where name is now deprecated and you should use region instead (another nonsensical release).
On other projects, I see that some other resources were updated too...
Same issue here:
Upgrade failed: failed to create resource: namespaces "gitlab-runner" not found
gitlab-runner is the namespace of the gitlab runner pod executing terraform and not mentioned in the release or chart.
Yeah, same issue, and limiting the helm provider to < 3.0.0 fixes it, so it is not a template issue but a helm provider issue. Executing helm locally, even with the newest version, also doesn't have this problem. The solution is definitely not to expect all Helm charts to adapt to the helm provider.
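A version constraint that keeps the provider on the 2.x line, as described above, could look like this (a sketch; adjust to your configuration):

```hcl
terraform {
  required_providers {
    helm = {
      source = "hashicorp/helm"
      # Stay below 3.0.0 until the namespace regression is resolved
      version = "~> 2.17"
    }
  }
}
```

Note that, as reported further down in this thread, pinning only helps if the state has not already been upgraded by a 3.x provider.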
I've identified an issue with a Helm chart that includes a StatefulSet. When deploying the chart, the StatefulSet is consistently created in the default namespace, disregarding the namespace specified during deployment. This behavior seems specific to charts containing StatefulSets.
Rolling back to a < 3.0.0 version is not possible because the state has been migrated from the SDK to the plugin framework (I don't remember the exact message).
I can confirm. Imo, this is critical, as it completely screws up resources in the clusters when updating to >= 3.0.0. Deployments, Secrets, and ConfigMaps from all over the place end up in the default namespace and need to be cleaned up manually, as the resources in the default namespace are no longer under helm/terraform's control.
I also have a project that's stuck because of this.
- Can't move forward because it tries to create a StatefulSet in the wrong namespace (default), and the pod can't run because it can't see the ConfigMap it wants to mount.
- Can't move backward because
The current state of helm_release.app was created by a newer provider version than is currently selected.
Please don't forget to add some unit tests to catch this kind of thing in the future. Thanks!
I can confirm this happens with version 3.0.0 & 3.0.1. Reverting back to version v2.17.0 works as expected.
Reverting doesn't work if you already upgraded the state. As @Infinoid says, and I can confirm.
Why don't you remove all the 3.x versions, so people at least stop updating to v3? At least add a WARNING: please do not update. That way new users won't hit the issue. And please fix it ASAP, because there is no way to roll back.
Reverting as in removing the existing state and re-importing. That worked for me.
If you only have a few resources, maybe... but if an entire cluster is impacted by this, it is a really time-consuming operation.
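The remove-and-re-import workaround mentioned above could look roughly like this per release (the resource address, namespace, and release name are placeholders):

```shell
# Drop the release from state (does not touch the cluster)
terraform state rm 'helm_release.release'

# Re-import it under the currently selected provider version;
# for helm_release the import ID is "<namespace>/<release-name>"
terraform import 'helm_release.release' 'mynamespace/releasename'
```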
I had the same issue and https://github.com/hashicorp/terraform-provider-helm/releases/tag/v3.0.2 resolved this issue for me
Hi @bitchecker We've just released a patch version v3.0.2 of the Helm Terraform Provider which includes a fix for this problem. We'd really appreciate it if you could try it out and confirm whether this resolves the issue for you. If you still encounter any problems, feel free to follow up here and we’ll take another look.
Thanks again for your patience and help improving the provider!
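To pick up the patch, a constraint like the following should work (a sketch; remember to run terraform init -upgrade afterwards):

```hcl
terraform {
  required_providers {
    helm = {
      source = "hashicorp/helm"
      # v3.0.2 includes the namespace fix
      version = ">= 3.0.2"
    }
  }
}
```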
Hi @jaylonmcshan19-x, I'm just testing the fix right now.
From what I can see, all the Helm resources are "impacted" by this, but only some Deployments/StatefulSets are restarted (I don't know what triggers a rollout restart).
The bad part concerns the resources that the previous apply created in a random namespace: they have to be removed manually, so be careful about what you are doing! 😅
Resolved by #1650
Can confirm this still happens with version 3.0.2:
```
Failed to execute "terraform apply -auto-approve ../../../.*************workdir/.tf-plan"
in ./.terragrunt-cache/ZXZZZZZZ-ZZZZZZZZ/ZZZZZZ/components/msk

Error: Error upgrading chart

  with module.kafka_exporter[0].helm_release.kafka-exporter,
  on ../../k8s-kafka-exporter/helm.tf line 1, in resource "helm_release" "kafka-exporter":
   1: resource "helm_release" "kafka-exporter" {

Upgrade failed: unable to build kubernetes objects from current release
manifest: resource mapping not found for name:
"XXXXXXXXX-prometheus-kafka-exporter" namespace: "" from "": no
matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first

exit status 1
```
versions during init:
```
17:24:24.328 STDOUT terraform: - Reusing previous version of hashicorp/helm from the dependency lock file
17:24:24.369 STDOUT terraform: - Installing hashicorp/kubernetes v2.37.1...
17:24:25.425 STDOUT terraform: - Installed hashicorp/kubernetes v2.37.1 (signed by HashiCorp)
17:24:25.450 STDOUT terraform: - Installing hashicorp/helm v3.0.2...
17:24:26.652 STDOUT terraform: - Installed hashicorp/helm v3.0.2 (signed by HashiCorp)
17:24:26.677 STDOUT terraform: - Installing hashicorp/aws v6.0.0...
17:24:36.922 STDOUT terraform: - Installed hashicorp/aws v6.0.0 (signed by HashiCorp)
```