terraform-provider-helm
Timeout not taken into account in helm_release resource
So, I'm trying to create an Elasticsearch cluster using the ECK operator. I'm able to install the operator properly (without Helm). I created a local chart to help me create the Elasticsearch cluster. When I apply it using Terraform, it fails because of a 30-second timeout. This is expected, because the ECK documentation states:
kubectl --request-timeout=1m apply -f elasticsearch.yaml
But it seems impossible to reproduce such a timeout configuration using the Helm provider 😢
Terraform Version and Provider Version
Terraform v0.12.23
Provider Version
1.1
Affected Resource(s)
- helm_release
Terraform Configuration Files
provider "helm" {
version = "~> v1.1"
debug = true
kubernetes {
load_config_file = false
host = "https://${var.k8s_cluster_endpoint}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = var.k8s_cluster_ca_cert
}
}
resource "helm_release" "elasticsearch" {
name = "elasticsearch"
chart = "./elasticsearch"
namespace = "elasticsearch"
timeout = 600000
wait = true
}
Debug Output
https://gist.github.com/jeromepin/8389eeaa21e9ed516a5e773fba54adfc
Expected Behavior
timeout should be taken into account, OR another parameter should be created, equivalent to kubectl's --request-timeout:
The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.
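For illustration only, a provider-level knob mirroring that kubectl flag might look like the sketch below. Note that request_timeout is not an existing argument of the Helm provider's kubernetes block; it is purely hypothetical and is shown only to make the request concrete.

provider "helm" {
  kubernetes {
    load_config_file       = false
    host                   = "https://${var.k8s_cluster_endpoint}"
    token                  = data.google_client_config.default.access_token
    cluster_ca_certificate = var.k8s_cluster_ca_cert

    # Hypothetical argument, equivalent to kubectl's --request-timeout.
    # It does not exist in the provider today; it only illustrates the ask above.
    request_timeout = "1m"
  }
}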
Actual Behavior
The client times out at 30 seconds.
Steps to Reproduce
terraform apply
Important Factoids
Running in GKE but I don't think it's relevant here.
I have the same issue in AKS, with the timeout not taken into account (Terraform v0.12.24 and Helm provider 1.2.0).
I have the same issue with Terraform v0.12.26 and Helm provider v1.2.2. It doesn't even take the default of 300 seconds; it times out at 30 seconds. For me, at least, it seems to be an intermittent problem. Also using GKE.
Update: I found the app I was deploying with Helm was getting a timeout from a remote gateway. After I fixed the remote gateway issues, the Helm provider worked as expected. However, I still believe the provider should honor the defined timeout setting regardless of whether it's getting a timeout elsewhere.
I had this issue too.
wait    = true
timeout = 300
are not obeyed.
Having the same issue using the latest Helm provider with Terraform 0.13.
I have added two tests and I can't reproduce the error. Can you give us the pod describe output from when the Helm operation ends?
@sebglon - here's where I bumped into this problem. My initial thought was that this happened because they had an extensive _tests.tpl or similar which was waiting for all the resources to be happy, but that's not the case. Fire up a GKE cluster, grab your kubeconfig, and do something like this...
provider "helm" {
kubernetes {
config_path = "./${path.module}/kubeconfig.yaml"
load_config_file = true
}
}
resource "helm_release" "cilium" {
name = "cilium"
chart = "https://github.com/cilium/charts/raw/master/cilium-1.8.5.tgz"
namespace = "cilium"
create_namespace = true
# None of these seem to work?
# wait = false
# timeout = 3600
#
# This resource doesn't support this. If it times out, run the pipeline again.
#timeouts {
# create = "60m"
# delete = "60m"
#}
set {
name = "global.hubble.metrics.enabled"
value = "{dns,drop,tcp,flow,port-distribution,icmp,http}"
}
set {
name = "global.hubble.enabled"
value = "true"
}
set {
name = "nodeinit.restartPods"
value = "true"
}
set {
name = "global.nativeRoutingCIDR"
value = google_container_cluster.default.cluster_ipv4_cidr
}
set {
name = "config.ipam"
value = "kubernetes"
}
set {
name = "global.gke.enabled"
value = "true"
}
set {
name = "global.cni.binPath"
value = "/home/kubernetes/bin"
}
set {
name = "nodeinit.removeCbrBridge"
value = "true"
}
set {
name = "nodeinit.reconfigureKubelet"
value = "true"
}
set {
name = "global.nodeinit.enabled"
value = "true"
}
}
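As a side note (not part of the original comment), the same chart options can also be supplied as a single values document through the provider's values argument. The sketch below is only an equivalent shape, using a hypothetical resource name cilium_alt and a subset of the settings above; it still depends on wait and timeout being honored.

resource "helm_release" "cilium_alt" {
  name             = "cilium"
  chart            = "https://github.com/cilium/charts/raw/master/cilium-1.8.5.tgz"
  namespace        = "cilium"
  create_namespace = true

  wait    = true
  timeout = 3600 # seconds; the value this issue reports as not being honored

  # Equivalent to a subset of the set blocks above, expressed as one values document.
  values = [yamlencode({
    global = {
      hubble = {
        enabled = true
        metrics = {
          enabled = ["dns", "drop", "tcp", "flow", "port-distribution", "icmp", "http"]
        }
      }
      gke               = { enabled = true }
      nativeRoutingCIDR = google_container_cluster.default.cluster_ipv4_cidr
    }
    nodeinit = {
      restartPods = true
    }
  })]
}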
Any update on this?
We are also facing the same issue.
Oh lord, it's been two years since I commented on this, but yeah, I got the email.
Since this was opened, Dataplane V2 is Cilium. Google has been cagey about whether they're including the Cilium CRDs going forward, but for the specific use case of "how do I install Cilium with Helm using Terraform", enabling Dataplane V2 in your GKE cluster solves that specific problem (see the sketch after this comment).
Given how old this is, it's probably worth opening up a new issue and referencing this one. Any code or examples in this issue are certainly too old.
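For context, a minimal sketch of enabling Dataplane V2 on a GKE cluster with the google provider; the cluster name, location, and node count are placeholders, and datapath_provider is the only setting relevant to the point above.

resource "google_container_cluster" "default" {
  name               = "example-cluster" # placeholder
  location           = "us-central1"     # placeholder
  initial_node_count = 1                 # placeholder

  # Dataplane V2: GKE-managed, Cilium-based networking, which avoids installing
  # Cilium through helm_release for the use case described above.
  datapath_provider = "ADVANCED_DATAPATH"
}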
same ...
It works for me though.
helm version
version.BuildInfo{Version:"v3.7.2", GitCommit:"663a896f4a815053445eec4153677ddc24a0a361", GitTreeState:"clean", GoVersion:"go1.16.10"}
EKS version (Kubernetes) v1.23
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
unstale