terraform-provider-helm
Helm upgrade on terraform
Hi all, we are using the Helm provider in Terraform to provision our resources in an AWS EKS cluster. This works on the first apply, but if we update a YAML file in the chart's templates and then run terraform plan or apply, it shows no infrastructure changes. It is not detecting the new changes in the YAML file.
Hi, could you provide the code of the helm_release that is having this issue? Thanks!
Hi, please find the code details below.
Hi, I have the same issue. Has anyone found a workaround, or is there any news on this? @dhineshbabuelango @meyskens
I have an idea to handle this use case for local charts. Maybe we can do something like a hash of the chart struct and store it in the tfstate. I did a PoC and it seems to work:
package main

import (
	"fmt"

	"github.com/mitchellh/hashstructure"
	"k8s.io/helm/pkg/chartutil"
	"k8s.io/helm/pkg/proto/hapi/chart"
)

// getHash hashes an arbitrary value, panicking on error.
func getHash(v interface{}) uint64 {
	hash, err := hashstructure.Hash(v, nil)
	if err != nil {
		panic(err)
	}
	return hash
}

// getDeepHashChart sums the hashes of a list of dependency charts.
func getDeepHashChart(charts []*chart.Chart) uint64 {
	var checkSum uint64
	for _, c := range charts {
		checkSum += getHashChart(c)
	}
	return checkSum
}

// getHashChart combines the hashes of a chart's values, templates,
// metadata, files, and (recursively) its dependencies.
func getHashChart(c *chart.Chart) uint64 {
	var checkSum uint64
	if len(c.Dependencies) > 0 {
		checkSum += getDeepHashChart(c.Dependencies)
	}
	checkSum += getHash(c.Values)
	checkSum += getHash(c.Templates)
	checkSum += getHash(c.Metadata)
	checkSum += getHash(c.Files)
	return checkSum
}

func main() {
	c, err := chartutil.Load("/home/damien/k8s/charts/hivebrite")
	if err != nil {
		panic(err)
	}
	fmt.Println(getHashChart(c))
}
what do you think about that @meyskens ?
Hi @meyskens, I created a PR which checks the local chart when one is used. I'm using it in production and the acceptance tests pass now.
Any news @meyskens ?
Closing this issue since it references a version based on Helm 2. If this is still valid against the master branch, please reopen it. Thanks.
This is still an issue with Helm 3 and this provider, and it should remain open. The title could perhaps be changed to something more descriptive: the Terraform Helm provider does not create a diff for local chart changes.
I've gotten around this by incrementing the local chart version.
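For example (chart name and version numbers here are purely illustrative), bumping the version field in the chart's Chart.yaml is enough to make the provider see a change:

```yaml
# localchart/Chart.yaml (illustrative values)
apiVersion: v2
name: localchart
description: A local chart managed by Terraform
version: 0.1.1   # bumped from 0.1.0 so terraform plan produces a diff
```

Since the helm_release resource tracks the chart version, any bump here forces an upgrade even though template contents alone are not diffed.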
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
I did a quick test and was able to reproduce this on the latest Helm provider version. Making changes to my local chart resulted in no Terraform diffs. However, I found I could get it working if I added manifest = true to my Helm provider config. Here is my main.tf for reference:
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "2.1.1"
    }
  }
}

provider "helm" {
  experiments {
    manifest = true
  }

  kubernetes {
    config_path = "~/.kube/config"
  }
}

resource "helm_release" "test" {
  wait      = false
  name      = "test"
  chart     = "${path.module}/localchart"
  namespace = "default"

  dependency_update = true

  values = [
    file("${path.module}/localchart/values.yaml"),
  ]
}
Then when I changed something in ./localchart/templates/deployment.yaml, the following diff was shown:
$ terraform plan
helm_release.test: Refreshing state... [id=test]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # helm_release.test will be updated in-place
  ~ resource "helm_release" "test" {
        id       = "test"
      ~ manifest = jsonencode(
          ~ {
              ~ deployment.apps/apps/v1/test-localchart = {
                  ~ spec = {
                      ~ template = {
                          ~ spec = {
                              ~ containers = [
                                  ~ {
                                      ~ livenessProbe = {
                                          ~ httpGet = {
                                              ~ port = "http" -> "https"
                                                # (1 unchanged element hidden)
                                            }
                                        }
                                        # (7 unchanged elements hidden)
                                    },
                                ]
                                # (2 unchanged elements hidden)
                            }
                            # (1 unchanged element hidden)
                        }
                        # (2 unchanged elements hidden)
                    }
                    # (3 unchanged elements hidden)
                }
                # (11 unchanged elements hidden)
            }
        )
        name     = "test"
        # (25 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
@dak1n1 I was having the same exact issue, and adding manifest = true fixed it for me also. Thank you!
Is there a way to restrict this behaviour to only local charts? Using this experimental feature causes Helm to try to upgrade other charts that I don't want it to touch...
Also, is this feature documented anywhere?
The experiments block is documented here: https://registry.terraform.io/providers/hashicorp/helm/latest/docs#experiments. It's a provider-level configuration, which means it will store the manifest for all Helm charts managed by that provider.
However, you could create an alias, which gives you the ability to have multiple configurations for the Helm provider: https://www.terraform.io/docs/language/providers/configuration.html#alias-multiple-provider-configurations
So you'd have one provider configuration without experiments {} and another provider configuration with experiments {}. That would allow you to choose which resources use this feature, by adding provider = helm.your-alias-name in the resource block.
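A minimal sketch of that aliased setup (the alias name, kubeconfig path, and chart path are illustrative, not from this thread):

```hcl
# Default helm provider: no experiments, used by most releases.
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

# Aliased helm provider with the manifest experiment enabled,
# for releases that need local-chart diff detection.
provider "helm" {
  alias = "with_manifest"

  experiments {
    manifest = true
  }

  kubernetes {
    config_path = "~/.kube/config"
  }
}

# Only this release opts in to the manifest experiment.
resource "helm_release" "local_chart" {
  provider = helm.with_manifest

  name  = "local-chart"
  chart = "${path.module}/localchart"
}
```

Releases without an explicit provider argument keep using the default configuration, so their plan output is unaffected by the experiment.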
I'm also experiencing this issue and came across the experiments block. When trying to add it to my provider config, I don't see any change in behavior. Am I doing it wrong? Here's my config:
provider "helm" {
  kubernetes { ... }

  experiments {
    manifest = true
  }
}
P.S. I'm using a local chart. Maybe that's why it doesn't work? 😢
Any news about this? If you use the manifest option inside a pipeline, the output of the manifests can be huge and break the pipeline. It would be great if this worked out of the box.
It would be great to have this option at the helm_release resource level. This underlying issue also shows up with other features like postrender: https://github.com/hashicorp/terraform-provider-helm/issues/675
Is there a way to restrict this behaviour to only local charts? Using this experimental feature causes helm to try to upgrade other charts that I don't want it to do...
@josh-gree @ki0 Why does adding this flag change behavior for charts where you don't need the feature? Meaning, I have one chart where I need this feature so that postrender sees changes. How will this affect the other charts? Will it cause more diffs/updates, and why?
A workaround I have found is updating the chart version in the Chart.yaml; Terraform then recognises there's an update to make. Note this is only needed for local charts that you have saved inside your repo.
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!