terraform-provider-helm
Values modified outside of terraform not detected as changes
Terraform Version
Terraform v0.12.12
Helm provider Version
~> 0.10
Affected Resource(s)
- helm_release
Terraform Configuration Files
resource "helm_release" "service" {
  name       = "service"
  chart      = "service"
  version    = "0.1.7"
  repository = module.k8s.helm_repository_name

  set {
    name  = "image.tag"
    value = "latest"
  }
}
Expected Behavior
A diff should be detected if settings of the release are modified outside of Terraform.
Actual Behavior
The helm provider does not detect changes to the release done outside of Terraform.
Steps to Reproduce
- terraform apply

  $ helm get values service
  image:
    tag: latest   # <-- Value as set in terraform

- helm upgrade service service --reuse-values --set image.tag=test

  $ helm get values service
  image:
    tag: test   # <-- Value in the deployed release changed

- terraform apply

  (Should detect the change done on the release when refreshing the state)

  ...
  helm_release.service: Refreshing state... [id=service]
  ...
  Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Just ran into this, and it is very annoying. For a workaround, I did this:
set {
  name  = "valuesChecksum"
  value = filemd5("${path.module}/values-production.yaml")
}
If you edit resources created by Helm directly, those changes will also be missed, because the values/release file has not changed.
+1 I am also facing both problems described above (by @Nefelim4ag and @thomas-brx).
Just ran into this, and it is very annoying. For a workaround, I did this:
set {
  name  = "valuesChecksum"
  value = filemd5("${path.module}/values-production.yaml")
}
Hey @ianks, could you elaborate on how this workaround works? I am guessing filemd5("${path.module}/values-production.yaml") will always return the same value and won't change if someone modifies the values directly in the deployed Helm release.
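You're right that it won't: filemd5 only hashes the local file at plan time, so it catches edits to the file in your repo, not drift in the cluster. If you want to at least surface the live values during a plan, here's a minimal sketch (my own idea, not part of this provider) using the hashicorp/external data source and the helm CLI, both assumed to be available where Terraform runs:

data "external" "live_values" {
  # Runs `helm get values` for the release and wraps the JSON in a single
  # string, since the external data source requires a flat map of strings.
  program = ["bash", "-c", "helm get values service -o json | jq '{values: tostring}'"]
}

output "live_release_values" {
  # Shows the user-supplied values currently deployed in the cluster
  value = data.external.live_values.result.values
}

This only makes the drift visible in the plan output; it doesn't make Terraform reconcile it.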
I also encountered this problem. Seems like a basic requirement terraform should be able to handle. It's a really serious bug.
To reproduce apply the following:
provider "helm" {
  kubernetes {
    config_context_cluster = "minikube"
    config_path            = "~/.kube/config"
  }
}
resource "helm_release" "my-helm-mongo" {
  name       = "my-mongodb"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "mongodb"
}
Then use kubectl to remove either the service or deployment.
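For example (an illustrative command; the actual resource names depend on the chart's naming templates):

foo@bar:~$ kubectl delete service my-mongodb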
Next, use terraform to check the state:
foo@bar:~$ terraform refresh
foo@bar:~$ terraform plan -out myplan
foo@bar:~$ terraform apply ./myplan
No changes are planned or applied, even though resources from the release were deleted.
Is anyone looking into this? If I use the helm provider to deploy a chart, that works fine, but when I add a YAML file to the chart's templates, the provider does not pick up that a file has been added when re-running terraform. How can we force Terraform to pick up additions/modifications in the chart?
Our team just ran into this yesterday. The title of this thread suggests the issue is only about values adjusted outside of the Terraform context, but what you're describing matches our experience: the initial deployment of a chart with a values file works, but even with reuse_values = true, I haven't seen a helm_release pick up on any changes yet.
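For what it's worth, reuse_values and reset_values do opposite things here (this is from the provider/Helm docs, not the comment above): reuse_values merges the previous release's values back in on upgrade, which actively preserves out-of-band edits, while reset_values discards them in favor of what Terraform supplies. A minimal sketch (release and chart names are hypothetical):

resource "helm_release" "example" {
  name  = "example"
  chart = "example"

  # On upgrade, reset to the chart's default values plus the values
  # supplied here, discarding values set outside of Terraform.
  reset_values = true
}

Note this only controls what happens when an upgrade runs; it doesn't by itself make the provider detect that an upgrade is needed.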
I just ran into this same issue. Does anyone have a workaround to detect direct changes to the chart?
@drexler We currently use this workaround in our project. First, we create a file hash across all YAML files in the chart directory (its path set in the variable var.chart_path)...
locals {
  # This hash forces Terraform to redeploy if a new template file is added or changed, or values are updated
  chart_hash = sha1(join("", [for f in fileset(var.chart_path, "**/*.yaml") : filesha1("${var.chart_path}/${f}")]))
}
... and then add this hash as a value in the helm_release resource:
# used to force an update for changes in the chart
set {
  name  = "chart-hash"
  value = local.chart_hash
}
Hope this helps :)
Edit: Oh, and we added reset_values = true in the helm_release resource as well; so far that combination has worked quite nicely.
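One caveat with that glob pattern (my observation, not from the comment above): **/*.yaml won't pick up _helpers.tpl or other non-YAML template files. Terraform's fileset supports {a,b} alternatives, so a broader pattern covers those too:

locals {
  # Sketch: also include .tpl helper templates in the chart hash
  chart_hash = sha1(join("", [for f in fileset(var.chart_path, "**/*.{yaml,tpl}") : filesha1("${var.chart_path}/${f}")]))
}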
Thanks @lukli11. One interesting thing I found reading the code is that the provider seems to have implemented this functionality as an experimental feature. I'll definitely use your workaround in my project, give the experimental feature a try, and see how it compares. Thanks for sharing.
Code reference: https://github.com/hashicorp/terraform-provider-helm/blob/main/helm/resource_release.go#L738-L818
Docs reference: https://registry.terraform.io/providers/hashicorp/helm/latest/docs#experiments
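For anyone else trying it, enabling that experiment is a provider-level setting (per the docs linked above; it requires a 2.x provider, so it wouldn't apply to the ~> 0.10 version in the original report):

provider "helm" {
  # Enables the experimental manifest diff: the chart is rendered at plan
  # time and changes to the generated manifest are shown in the plan.
  experiments {
    manifest = true
  }
}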
The experimental manifest feature didn't work as expected, but @lukli11's workaround is useful for detecting chart changes. Slick hack! 💯
I'd also like to "bump" this issue because it's impacting us as well.
Probably somewhat related to "Terraform state updated with new Chart values after apply errored". It seems this resource does not refresh the actual values, and instead just uses whatever is in the state.
Also ran into this issue when modifying a template. An easy workaround is to increase the version in Chart.yaml, which will update the Terraform resource (sketched below). This is probably a good idea anyway when adding or modifying templates.
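To make that concrete, here is a minimal sketch using the release from the original report (the version numbers are illustrative): bump version in Chart.yaml, then mirror it in the resource so the provider sees an attribute change and performs the upgrade.

resource "helm_release" "service" {
  name       = "service"
  chart      = "service"
  version    = "0.1.8" # was 0.1.7; bumped to match Chart.yaml, forcing an upgrade
  repository = module.k8s.helm_repository_name
}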
I have the same problem. Issue #382 also describes the same annoying problem, but nobody ever answered, so it was closed. Could you please help, @alexsomesan @BBBmau?
Found the same issue. When manually making changes to the Helm chart, one workaround is to add a comment in values.yaml (if you have access to it) and trigger an apply; it will pick up the change.
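That trick works when the file is wired in through the values argument, since any edit to the file, even a comment, changes the string Terraform tracks. A sketch, assuming a values.yaml next to the module (names are hypothetical):

resource "helm_release" "example" {
  name  = "example"
  chart = "example"

  # The entire file content is an attribute of this resource, so editing
  # values.yaml (even just adding a comment) produces a diff on plan.
  values = [file("${path.module}/values.yaml")]
}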
We found this issue as well. Is there any plan to allow Terraform to override any changes applied outside of the helm_release resource?
For anyone looking for a super simple solution without computing hashes etc., see https://github.com/hashicorp/terraform-provider-helm/issues/821#issuecomment-1017623574
I ran into this issue today. Adding a checksum for the values.yaml file was my workaround:

set {
  name  = "valuesChecksum"
  value = filemd5("${path.module}/values-production.yaml")
}
Today I also found that it doesn't detect when a service was deleted. Not sure if that's a Helm issue, though.