terraform-provider-helm
Deployment of HashiCorp Vault with helm_release fails with error: Chart.yaml file is missing.
Terraform, Provider, Kubernetes and Helm Versions
Terraform version: 1.5.1
Provider version: 2.10.1
Kubernetes version: 1.24.0
Affected Resource(s)
- helm_release
I am unable to deploy Vault via Helm with Terraform despite providing the correct repository URL and chart name, as shown in many of the examples I've looked at. Helm insists that Chart.yaml is missing when it is not actually missing.
Terraform Configuration Files
resource "helm_release" "vault" {
name = "vault"
namespace = "vault"
repository = "https://helm.releases.hashicorp.com"
chart = "vault"
version = "0.25.0"
values = [templatefile("${path.module}/templates/override_values.yml.tpl", {
vault-ha-mode = var.vault-ha-mode
vault-server-replicas = var.vault-server-replicas
vault-server-mem-request = var.vault-server-mem-request
vault-server-cpu-request = var.vault-server-cpu-request
vault-server-mem-limit = var.vault-server-mem-limit
vault-server-cpu-limit = var.vault-server-cpu-limit
vault-auto-unseal-key-region = data.aws_region.current.name
vault-auto-unseal-key-id = data.aws_kms_key.vault-auto-unseal-key.id
vault-server-addrs = null_resource.vault-servers[*].triggers.names
})]
set {
name = "serviceAccount.create"
value = "false"
}
set {
name = "serviceAccount.name"
value = kubernetes_service_account.vault-service-account.metadata[0].name
}
}
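The template file itself is not included in the issue, but for context, a hypothetical excerpt of templates/override_values.yml.tpl might look like the sketch below: templatefile() substitutes the variables passed above into the YAML before it is handed to the chart, producing the rendered values shown in the debug output further down. The keys and variable names here are illustrative only.

# Hypothetical excerpt of templates/override_values.yml.tpl (illustrative only;
# the real template is not shown in the issue). templatefile() replaces each
# ${...} reference with the value passed in the vars map above.
server:
  ha:
    enabled: "${vault-ha-mode}"
    replicas: "${vault-server-replicas}"
  resources:
    requests:
      memory: "${vault-server-mem-request}"
      cpu: "${vault-server-cpu-request}"
    limits:
      memory: "${vault-server-mem-limit}"
      cpu: "${vault-server-cpu-limit}"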
Debug Output
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# module.vault.helm_release.vault will be created
+ resource "helm_release" "vault" {
+ atomic = false
+ chart = "vault"
+ cleanup_on_fail = false
+ create_namespace = false
+ dependency_update = false
+ disable_crd_hooks = false
+ disable_openapi_validation = false
+ disable_webhooks = false
+ force_update = false
+ id = (known after apply)
+ lint = false
+ manifest = (known after apply)
+ max_history = 0
+ metadata = (known after apply)
+ name = "vault"
+ namespace = "vault"
+ pass_credentials = false
+ recreate_pods = false
+ render_subchart_notes = true
+ replace = false
+ repository = "https://helm.releases.hashicorp.com"
+ reset_values = false
+ reuse_values = false
+ skip_crds = false
+ status = "deployed"
+ timeout = 300
+ values = [
+ <<-EOT
# Vault Helm Chart Value Overrides
global:
  enabled: true
  tlsDisable: false

injector:
  enabled: true
  # Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/
  image:
    repository: "hashicorp/vault-k8s"
    tag: "latest"
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m

server:
  # These Resource Limits are in line with node requirements in the
  # Vault Reference Architecture for a Small Cluster
  resources:
    requests:
      memory: "2gi"
      cpu: "1000m"
    limits:
      memory: "2gi"
      cpu: "2000m"

  # For HA configuration and because we need to manually init the vault,
  # we need to define custom readiness/liveness Probe settings
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60

  # extraEnvironmentVars is a list of extra environment variables to set with the stateful set. These could be
  # used to include variables required for auto-unseal.
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/tls-ca/ca.crt

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`.
  extraVolumes:
    - type: secret
      name: tls-server
    - type: secret
      name: tls-ca
    - type: secret
      name: kms-creds

  # This configures the Vault Statefulset to create a PVC for audit logs.
  # See https://www.vaultproject.io/docs/audit/index.html to know more
  auditStorage:
    enabled: true

  standalone:
    enabled: false

  # Run Vault in "HA" mode.
  ha:
    enabled: "true"
    replicas: "3"
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true

        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/tls-server/fullchain.pem"
          tls_key_file = "/vault/userconfig/tls-server/server.key"
          tls_client_ca_file = "/vault/userconfig/tls-server/client-auth-ca.pem"
        }

        storage "raft" {
          path = "/vault/data"
          retry_join {
            leader_api_addr = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/server.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/server.key"
          }
          retry_join {
            leader_api_addr = "https://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/server.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/server.key"
          }
          retry_join {
            leader_api_addr = "https://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/server.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/server.key"
          }
        }

        service_registration "kubernetes" {}

        seal "awskms" {
          region = "<censored>"
          kms_key_id = "<censored>"
        }

# Vault UI
ui:
  enabled: true
  serviceType: "LoadBalancer"
  serviceNodePort: null
  externalPort: 8200
  # For Added Security, edit the below
  #loadBalancerSourceRanges:
  # - < Your IP RANGE Ex. 10.0.0.0/16 >
  # - < YOUR SINGLE IP Ex. 1.78.23.3/32 >
EOT,
]
+ verify = false
+ version = "0.25.0"
+ wait = true
+ wait_for_jobs = false
+ set {
+ name = "serviceAccount.create"
+ value = "false"
}
+ set {
+ name = "serviceAccount.name"
+ value = "vault-sa"
}
}
Steps to Reproduce
- Set up Terraform with the Kubernetes and Helm providers.
- Define the helm_release resource in Terraform with the Vault parameters shown in the output above (you can probably drop all the special configuration included with values and set; see the stripped-down sketch after this list).
- Run terraform apply.
- Fail.
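A stripped-down resource along the lines of the sketch below (essentially what the maintainer tries further down, assuming the Helm provider is already pointed at a reachable cluster) is enough to hit the same chart-resolution code path:

# Minimal reproduction: no values or set blocks, just a chart pulled from the
# HashiCorp repository. Assumes the Helm provider is already configured.
resource "helm_release" "vault" {
  name       = "vault"
  repository = "https://helm.releases.hashicorp.com"
  chart      = "vault"
  version    = "0.25.0"
}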
Expected Behavior
Terraform should deploy vault with helm_release.
Actual Behavior
The Terraform Helm provider is failing with:
│ Error: could not download chart: Chart.yaml file is missing
│
│ with module.vault.helm_release.vault,
│ on vault/vault.tf line 31, in resource "helm_release" "vault":
│ 31: resource "helm_release" "vault" {
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Hi,
unfortunately, I'm not able to reproduce this... it works as expected when I try it.
resource "helm_release" "vault" {
name = "vault"
repository = "https://helm.releases.hashicorp.com"
chart = "vault"
version = "0.25.0"
}
➜ issue-helm-1215 terraform apply -auto-approve
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# helm_release.vault will be created
+ resource "helm_release" "vault" {
+ atomic = false
+ chart = "vault"
+ cleanup_on_fail = false
+ create_namespace = false
+ dependency_update = false
+ disable_crd_hooks = false
+ disable_openapi_validation = false
+ disable_webhooks = false
+ force_update = false
+ id = (known after apply)
+ lint = false
+ manifest = (known after apply)
+ max_history = 0
+ metadata = (known after apply)
+ name = "vault"
+ namespace = "default"
+ pass_credentials = false
+ recreate_pods = false
+ render_subchart_notes = true
+ replace = false
+ repository = "https://helm.releases.hashicorp.com"
+ reset_values = false
+ reuse_values = false
+ skip_crds = false
+ status = "deployed"
+ timeout = 300
+ verify = false
+ version = "0.25.0"
+ wait = true
+ wait_for_jobs = false
}
Plan: 1 to add, 0 to change, 0 to destroy.
helm_release.vault: Creating...
helm_release.vault: Creation complete after 9s [id=vault]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Same problem with grafana-agent
resource "helm_release" "grafana-agent-helm" {
name = "default-agent"
repository = "https://grafana.github.io/helm-charts"
chart = "grafana-agent"
}
@ctwilleager-alio Do you have a directory named vault in your Terraform project directory? I had the same problem; after renaming it, the error went away.
Example to reproduce: main.tf:
resource "helm_release" "vault" {
name = "vault"
namespace = "vault"
repository = "https://helm.releases.hashicorp.com"
chart = "vault"
}
terraform init
mkdir vault
terraform apply -auto-approve
# Error: could not download chart: Chart.yaml file is missing
rmdir vault
terraform apply -auto-approve
# Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
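The second error just means this scratch project has no cluster configured, which shows the chart download itself succeeds once the vault directory is gone. A minimal provider block like the sketch below would get past it (the kubeconfig path is an assumption for illustration):

# Point the Helm provider at a local kubeconfig so the second apply can reach
# a cluster; the path is illustrative.
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}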
@alexsomesan Can you please execute mkdir vault before you run terraform apply?
@alexsomesan Confirmed, same problem here. This makes modules kind of annoying. I moved my longhorn configuration to a TF module under a directory called longhorn because... why wouldn't I name it that? And it broke. Took a while to find this.
Something else I forgot to mention: it does seem to be intermittent. I'm confident that the longhorn module was working with a directory called longhorn before, but then it stopped and would not start working again, so reproduction may be difficult.
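For reference, the layout that triggers it for me looks roughly like the hypothetical sketch below: the module directory name matches the chart name, so the provider resolves the chart argument against ./longhorn (which has no Chart.yaml) instead of the repository.

# Hypothetical project layout (paths are illustrative):
#   ./main.tf              <- contains the module block
#   ./longhorn/main.tf     <- contains the helm_release below

module "longhorn" {
  source = "./longhorn"
}

# ./longhorn/main.tf
resource "helm_release" "longhorn" {
  name       = "longhorn"
  repository = "https://charts.longhorn.io"
  chart      = "longhorn"
}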
This bug should very seriously be investigated and resolved. The provider should not assume that, just because there is a directory matching the chart name, we want to load a custom chart from it. I structure my folders so that when I have a custom helm_release, I pair it with a folder for that resource, which makes it very easy to follow. There are obviously other ways, but the provider shouldn't be looking for a local directory unless it's told to.
As others have noted, you can work around it by renaming the folder. In my case I don't like that, since it disrupts my structure and feels bloated, so you can also point the chart argument directly at the .tgz URL.
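For the Vault example in this issue, pointing chart directly at the packaged archive looks like the sketch below; the exact .tgz path is my assumption based on the repository's usual layout, so check the repo's index.yaml for the real URL.

# Workaround: reference the packaged chart directly so the local ./vault
# directory is never consulted. Verify the .tgz URL against the repo index.
resource "helm_release" "vault" {
  name      = "vault"
  namespace = "vault"
  chart     = "https://helm.releases.hashicorp.com/vault-0.25.0.tgz"
}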