terraform-provider-helm
Chart.yaml is missing when installing metallb in a Terraform submodule
terraform:
Terraform v1.1.6
on linux_amd64
+ provider registry.terraform.io/bmatcuk/vagrant v4.1.0
+ provider registry.terraform.io/gavinbunney/kubectl v1.13.1
+ provider registry.terraform.io/hashicorp/external v2.2.0
+ provider registry.terraform.io/hashicorp/helm v2.4.1
+ provider registry.terraform.io/hashicorp/null v3.1.0
helm:
version.BuildInfo{Version:"v3.7.0", GitCommit:"eeac83883cb4014fe60267ec6373570374ce770b", GitTreeState:"clean", GoVersion:"go1.16.8"}
Kubernetes: 1.23
MetalLB:
downloading helm chart from https://metallb.github.io/metallb
Running on Ubuntu 20.04.
I am having a problem when trying to install MetalLB with Terraform and Helm. The error I am getting is "Chart.yaml file is missing".
I get this error when the MetalLB configuration is in a Terraform submodule. If I put the Terraform resources in the root module, it runs fine. I can install other Helm charts from a submodule (tested with bitnami/apache).
My setup is as follows:
./main.tf
terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.7.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
    external = {
      source  = "hashicorp/external"
      version = ">= 2.2.0"
    }
  }
}

provider "kubectl" {
  load_config_file = true
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

module "metallb" {
  source = "./metallb"
}
./metallb/main.tf
terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.7.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
    external = {
      source  = "hashicorp/external"
      version = ">= 2.2.0"
    }
  }
}

provider "kubectl" {
  load_config_file = true
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

resource "kubectl_manifest" "metallb_namespace" {
  yaml_body = file("./metallb/namespace_metallb.yaml")
}

resource "helm_release" "metallb" {
  name            = "metallb"
  repository      = "https://metallb.github.io/metallb"
  chart           = "metallb"
  timeout         = 120
  cleanup_on_fail = true
  force_update    = true
  namespace       = "metallb-system"
  depends_on      = [kubectl_manifest.metallb_namespace]
}

resource "kubectl_manifest" "metallb_configmap" {
  yaml_body  = file("./metallb/configmap_metallb.yaml")
  depends_on = [helm_release.metallb]
}
./metallb/configmap_metallb.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.50.240-192.168.50.250
./metallb/namespace_metallb.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
Please let me know if more information is needed.
Interestingly enough, when I downloaded the latest main from here and started it in debug mode, it worked just as it should. Maybe it is something with dependency versions?
This also seems to happen for me with blocky from the https://k8s-at-home.com/charts/ repo. Other charts from that repo install fine, but blocky throws this error. Putting it in the root module didn't help for me, though. I am on the same version of the Helm provider, but my Terraform version differs slightly:
Terraform v1.1.7
on darwin_arm64
I have the same issue (or a very similar one) with prometheus from the https://prometheus-community.github.io/helm-charts repo. If I use an old version of the provider (1.3.2) I get "Error: validation: chart.metadata is required", and if I upgrade to a more recent version I get "Chart.yaml file is missing".
I had the same problem; it happens due to https://github.com/hashicorp/terraform-provider-helm/issues/735. So a quick fix here is:

module "metallb" {
  source = "./metallb"
}

->

module "metallb" {
  source = "./modules/metallb"
}
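If the linked issue is indeed the cause, the failure mode would be that Helm tries a bare chart name as a local path before falling back to the configured repository, so a top-level directory that happens to be named metallb shadows the repository chart. A minimal sketch of that collision (the directory names here are assumptions that just mirror this setup; no Helm commands are actually run):

```shell
# Recreate the layout in a scratch directory: a submodule directory
# whose name matches the chart name, but which is not a Helm chart.
cd "$(mktemp -d)"
mkdir -p metallb

# A bare chart reference "metallb" would match this local directory,
# which has no Chart.yaml, hence "Chart.yaml file is missing".
if [ -d metallb ] && [ ! -f metallb/Chart.yaml ]; then
  echo "bare name 'metallb' resolves to a non-chart directory"
fi

# After moving the module under ./modules, no top-level "metallb"
# exists, so the name can fall through to the repository lookup.
mkdir -p modules
mv metallb modules/metallb
[ ! -d metallb ] && echo "bare name now falls through to the repo"
```

This also explains why renaming the module directory to ./modules/metallb works even though the leaf directory is still called metallb: the lookup is relative to the working directory, where only the top-level path matters.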
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
I believe this is still outstanding.
I may have found the cause; see this comment: https://github.com/hashicorp/terraform-provider-helm/issues/1215#issuecomment-1976722131