terraform-provider-helm
Ability to designate which provider to use on a helm_resource
Description
Terraform is configuring multiple Kubernetes clusters in a single apply, and each cluster requires its own helm provider (and kubeconfig) in order to deploy a helm_release to it.
Potential Terraform Configuration
provider "helm" {
count = length(var.clusters)
alias = var.clusters[count.index].cluster_name
kubernetes {
config_path = local_file.kube_cluster_yaml[count.index].filename
}
}
resource "helm_release" "fluentd" {
count = length(var.clusters)
provider = helm[var.clusters[count.index].cluster_name]
name = "fluentd"
chart = "fluentd"
repository = "https://kubernetes-charts.storage.googleapis.com/ "
namespace = "fluentd"
create_namespace = true
}
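Provider blocks do not currently support count or for_each, so the closest thing that works today is one statically aliased helm provider per cluster, roughly along the lines of the sketch below (the cluster names and kubeconfig paths are placeholders, not part of the original request):
# One provider block has to be written out by hand for each cluster.
provider "helm" {
  alias = "cluster_a"
  kubernetes {
    config_path = "kubeconfig-cluster-a.yaml" # placeholder path
  }
}

provider "helm" {
  alias = "cluster_b"
  kubernetes {
    config_path = "kubeconfig-cluster-b.yaml" # placeholder path
  }
}

resource "helm_release" "fluentd_cluster_a" {
  provider         = helm.cluster_a
  name             = "fluentd"
  chart            = "fluentd"
  repository       = "https://kubernetes-charts.storage.googleapis.com/"
  namespace        = "fluentd"
  create_namespace = true
}
This obviously does not scale with a variable-length var.clusters list, which is what this request is asking for.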
References
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
I am thinking this is actually a bug.
Given this Terraform configuration:
## Cluster 1
module "cluster1" {
  source = "../cluster-module"

  providers = {
    helm       = helm.cluster1
    kubernetes = kubernetes.cluster1
  }
}

provider "kubernetes" {
  alias                  = "cluster1"
  load_config_file       = false
  host                   = module.cluster1.eks_cluster_host
  token                  = module.cluster1.eks_cluster_token
  cluster_ca_certificate = module.cluster1.eks_cluster_ca_certificate
}

provider "helm" {
  alias = "cluster1"

  kubernetes {
    load_config_file       = false
    host                   = module.cluster1.eks_cluster_host
    token                  = module.cluster1.eks_cluster_token
    cluster_ca_certificate = module.cluster1.eks_cluster_ca_certificate
  }
}

## Cluster 2
module "cluster2" {
  source = "../cluster-module"

  providers = {
    helm       = helm.cluster2
    kubernetes = kubernetes.cluster2
  }
}

provider "kubernetes" {
  alias                  = "cluster2"
  load_config_file       = false
  host                   = module.cluster2.eks_cluster_host
  token                  = module.cluster2.eks_cluster_token
  cluster_ca_certificate = module.cluster2.eks_cluster_ca_certificate
}

provider "helm" {
  alias = "cluster2"

  kubernetes {
    load_config_file       = false
    host                   = module.cluster2.eks_cluster_host
    token                  = module.cluster2.eks_cluster_token
    cluster_ca_certificate = module.cluster2.eks_cluster_ca_certificate
  }
}
When an alias is used, the helm provider seems to refresh state against the correct cluster (and plans to create the releases), but when it attempts to apply, it uses the wrong provider and can fail if the releases already exist in the other cluster.
# Cluster1 helm releases
$ helm ls --all-namespaces -q
aws-node-termination-handler
cluster-autoscaler
metrics-server
# Cluster2 helm releases
$ helm ls --all-namespaces -q
## [there are none]
Terraform refreshes and plans to create those same releases in cluster 2 because they don't exist, but when it attempts to apply it fails with the following error:
Error: cannot re-use a name that is still in use
on .terraform/modules/cluster2.cluster_autoscaler/helm_release.tf line 1, in resource "helm_release" "release":
1: resource "helm_release" "release" {
The release exists in cluster1 but not in cluster2, which makes me believe the helm provider is attempting to talk to the wrong cluster even though we have declared the provider for the resource (in our case a module resource, but likely any other helm_* resource as well).
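For context, the release inside the shared module presumably looks roughly like the sketch below (the ../cluster-module source is not shown in the thread, so the names and values are assumptions); the resource has no provider argument of its own, so it should inherit whichever helm provider the calling module maps to helm in its providers block:
# Hypothetical reconstruction of ../cluster-module/helm_release.tf (not shown in the thread)
resource "helm_release" "release" {
  # No explicit provider argument: this resource should use the helm provider
  # passed in via providers = { helm = helm.clusterN } on the module block.
  name             = "cluster-autoscaler"                                 # assumed value
  chart            = "cluster-autoscaler"                                 # assumed value
  repository       = "https://kubernetes-charts.storage.googleapis.com/"
  namespace        = "kube-system"                                        # assumed value
  create_namespace = true
}
If that wiring is in place, the failure above suggests the provider binding is being resolved incorrectly at apply time rather than at plan time.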
Can anyone confirm if this is still a problem?
Seeing the same issue.
Based on the description in https://github.com/hashicorp/terraform-provider-helm/issues/539#issuecomment-697828128, I think I am hitting this error too. The use case is exactly the same: Terraform code that does a helm install on two different clusters using separate helm and kubernetes provider blocks with different alias names. However, the error I am seeing is:
error: You must be logged in to the server (Unauthorized)
Is helm attempting to talk to the wrong cluster on apply?
This seems like quite a relevant feature to have, especially as it's almost always abused for dynamic resource generation. Any updates on this?
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!