
First helm release always succeeds and doesn't wait for all pods running

Open FischlerA opened this issue 3 years ago • 25 comments

Terraform, Provider, Kubernetes and Helm Versions

  • Terraform version: 0.14.4
  • Provider version: 2.0.2
  • Kubernetes version: AWS EKS 1.18
  • Helm version: 3

Affected Resource(s)

  • helm_release

Debug Output

https://gist.github.com/FischlerA/7930aff18d68a7b133ff22aadc021517

Steps to Reproduce

  1. terraform apply

Expected Behavior

The helm deployment should fail, since the pod being deployed runs an image that will always fail. (It's a private image which I can't share.)

Actual Behavior

The first time the helm release is deployed, it always succeeds after reaching the timeout (5 min); any further deployments fail as they are supposed to after reaching the timeout (5 min).

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

FischlerA avatar Jan 26 '21 08:01 FischlerA

Thanks for opening @FischlerA. Did you try using the wait attribute? By default helm will not wait for all pods to become ready; it just creates the API resources.
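
For reference, a minimal configuration that sets wait explicitly might look like this (the repository, chart, and names here are placeholders, not from the original report):

resource "helm_release" "example" {
  name       = "example"
  repository = "https://example.com/charts" # placeholder repository
  chart      = "example-chart"              # placeholder chart

  wait    = true # block until all resources report ready
  timeout = 300  # seconds to wait before the release is marked failed
}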

jrhouston avatar Jan 27 '21 18:01 jrhouston

Thanks for opening @FischlerA. Did you try using the wait attribute? By default helm will not wait for all pods to become ready; it just creates the API resources.

Per the documentation, the wait attribute defaults to true. But even after explicitly setting it to true, the behavior didn't change: the release was still seen as a success with a crashing pod.

FischlerA avatar Jan 28 '21 06:01 FischlerA

Ah yep, you're right – I will try and reproduce this.

The provider itself doesn't do the waiting; it just passes the wait flag along to the install action in the helm package. Do you get the same issue if you do a helm install --wait with your chart?

jrhouston avatar Jan 28 '21 07:01 jrhouston

@jrhouston
We deployed the chart again using Helm directly with helm install --wait, and the behaviour was as expected: after waiting for five minutes, we got the error message Error: timed out waiting for the condition.

isabellemartin2 avatar Feb 01 '21 06:02 isabellemartin2

I had the same experience using helm_release in Terraform. If something goes wrong and the pod status stays at "Pending", "Error", "CreateContainer", or some other unusual status for a longer time, the Helm Terraform provider does not wait until the pods are running; it exits and reports the release as completed. However, the Terraform state was updated as failed.

dinandr avatar Feb 11 '21 14:02 dinandr

Saw the same behavior today when I deployed ingress-nginx and the very first job failed because it was rejected by another webhook. The terraform apply run waited for 5 minutes but reported success, even though not a single resource was created successfully. In fact, only one job existed, and it had been rejected.

whiskeysierra avatar Feb 19 '21 19:02 whiskeysierra

@jrhouston were you able to take a look at this?

FischlerA avatar Mar 22 '21 06:03 FischlerA

I'm running into this too. I pretty regularly have a successful terraform apply (everything shows successful and complete) and end up with helm_release resources that show ~ status = "failed" -> "deployed" on a second run.

jgreat avatar Mar 24 '21 21:03 jgreat

I think we are hitting this as well, but I'm not entirely sure. We are seeing helm_release pass on the first run with wait = true even though not all the pods come online, because of a Gatekeeper/PSP we have in the cluster. We are not sure how to get our helm_release to fail in that case.

loreleimccollum-work avatar Mar 25 '21 19:03 loreleimccollum-work

Hi all. I'm new to Terraform. I've had to split up my Terraform deployments and include a time_sleep because of this issue. Looking forward to an update here.
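
A minimal sketch of that delay workaround, assuming the hashicorp/time provider (the resource names, chart path, and duration below are placeholders):

resource "helm_release" "app" {
  name  = "app"
  chart = "./charts/app" # placeholder chart path
}

# Artificial delay as a workaround, since the release can report
# success before the pods are actually ready.
resource "time_sleep" "wait_for_pods" {
  depends_on      = [helm_release.app]
  create_duration = "120s"
}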

thecb4 avatar Apr 12 '21 22:04 thecb4

Same thing with a Helm job and wait_for_jobs = true. It waits for the timeout and then returns success. If I reapply, I get the following:

$ terraform apply -var image_tag=dev-ed4854d
helm_release.job_helm_release: Refreshing state... [id=api-migration]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.job_helm_release.helm_release.job_helm_release will be updated in-place
  ~ resource "helm_release" "job_helm_release" {
        id                         = "api-migration"
        name                       = "api-migration"
      ~ status                     = "failed" -> "deployed"
        # (24 unchanged attributes hidden)


        # (22 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
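
For context, wait_for_jobs is meant to make the provider also wait until any Jobs complete before marking the release as deployed. A minimal sketch of such a configuration (the chart path and timeout here are assumptions, not from the original report):

resource "helm_release" "job_helm_release" {
  name  = "api-migration"
  chart = "./charts/api-migration" # placeholder chart path

  wait          = true
  wait_for_jobs = true # should also block until Jobs have completed
  timeout       = 300
}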

descampsk avatar May 20 '21 16:05 descampsk

I faced this issue too. The helm_release timeout option seems not to work: the helm_release was reported as "successfully completed" within 5 seconds, even though the pods were still in the init stage.

vinothkumarsubs2019 avatar Sep 07 '21 05:09 vinothkumarsubs2019

Me too. The pod status stays at "Pending" when I use helm_release in Terraform, but it worked well with the Helm CLI. Error: release nginx failed, and has been uninstalled due to atomic being set: timed out waiting for the condition
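
For reference, atomic in the provider corresponds to Helm's --atomic behaviour: if the release fails to become ready within the timeout, a first install is uninstalled (or an upgrade rolled back). A minimal sketch with placeholder values:

resource "helm_release" "nginx" {
  name       = "nginx"
  repository = "https://charts.bitnami.com/bitnami" # placeholder repository
  chart      = "nginx"

  atomic  = true # uninstall/roll back the release if it fails to become ready
  wait    = true
  timeout = 300
}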

Monsterlin2018 avatar Dec 17 '21 09:12 Monsterlin2018

I don't know what happened, but it's back to normal now. In the past 6 hours, I upgraded Kubernetes to 1.23.1.

resource "helm_release" "traefik" {
  name       = "traefik"
  repository = "https://helm.traefik.io/traefik"
  chart      = "traefik"
  version    = "10.3.2"
  
  # I just tried to add this line
  wait = false
}

Versions :

bash-5.1# terraform version
Terraform v1.0.9
on linux_amd64
+ provider registry.terraform.io/hashicorp/helm v2.4.1

# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:41:01Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:34:54Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}

# helm version
version.BuildInfo{Version:"v3.7.0-rc.2", GitCommit:"4a7c306aa9dcbdeecf79c7517851921a21b72e56", GitTreeState:"clean", GoVersion:"go1.16.7"}

Monsterlin2018 avatar Dec 17 '21 16:12 Monsterlin2018

Is anyone still encountering this issue on the latest version of the provider? I think we fixed this in https://github.com/hashicorp/terraform-provider-helm/pull/727.

I just tried to reproduce this and saw the error in provider version v2.0.2, but now I see the appropriate failure diagnostic in v2.6.0.

BBBmau avatar Jun 23 '22 18:06 BBBmau

I can't speak for everyone, but we haven't seen this issue in a while.

jgreat avatar Jun 23 '22 19:06 jgreat

This happens to me as well.

n1vgabay avatar Aug 24 '22 14:08 n1vgabay

Is anyone still encountering this issue on the latest version of the provider? I think we fixed this in #727.

I just tried to reproduce this and saw the error in provider version v2.0.2, but now I see the appropriate failure diagnostic in v2.6.0.

I haven't tried it with v2.6.0 yet, but I will do so and report back; it might take me a few days.

FischlerA avatar Aug 25 '22 09:08 FischlerA

Reproduced on version 2.6.0 for me

enterdv avatar Sep 12 '22 11:09 enterdv

Reproduced on version 2.6.0 for me

Hello @enterdv! Are you able to include the config that you used, so that we can reproduce this issue? We'll want to look into it again if we're still seeing this bug.

BBBmau avatar Oct 13 '22 16:10 BBBmau

Reproduced on version 2.6.0 for me

Hello @enterdv! Are you able to include the config that you used, so that we can reproduce this issue? We'll want to look into it again if we're still seeing this bug.

Hello, I tried with a simple helm release:

resource "helm_release" "redis" {
  name             = "${var.project}-redis"
  repository       = "https://charts.bitnami.com/bitnami"
  chart            = "redis"
  version          = "17.0.5"
  atomic           = true
  create_namespace = true
  namespace        = "${var.project}-infra"

  values = [
    file("${path.module}/values.yaml")
  ]

  set {
    name  = "fullnameOverride"
    value = "${var.project}-redis"
  }
  set {
    name  = "master.persistence.size"
    value = var.storage_size
  }
  set {
    name  = "master.resources.requests.memory"
    value = var.memory
  }
  set {
    name  = "master.resources.requests.cpu"
    value = var.cpu
  }
  set {
    name  = "master.resources.limits.memory"
    value = var.memory
  }
  set {
    name  = "master.resources.limits.cpu"
    value = var.cpu
  }
  set {
    name  = "replica.persistence.size"
    value = var.storage_size
  }
  set {
    name  = "replica.resources.requests.memory"
    value = var.memory
  }
  set {
    name  = "replica.resources.requests.cpu"
    value = var.cpu
  }
  set {
    name  = "replica.resources.limits.memory"
    value = var.memory
  }
  set {
    name  = "replica.resources.limits.cpu"
    value = var.cpu
  }
  set {
    name  = "replica.replicaCount"
    value = var.replica_count
  }
  set {
    name  = "sentinel.quorum"
    value = var.sentinel_quorum
  }
}

enterdv avatar Oct 19 '22 06:10 enterdv

@enterdv Hello! Thank you for providing the TF config. Could you provide the output after running TF_LOG=debug terraform apply?

BBBmau avatar Dec 02 '22 16:12 BBBmau

Me too. The pod status stays at "Pending" when I use helm_release in Terraform, but it worked well with the Helm CLI. Error: release nginx failed, and has been uninstalled due to atomic being set: timed out waiting for the condition

Have you fixed this problem?

ricardorqr avatar May 10 '23 21:05 ricardorqr