
Error: context deadline exceeded with kubernetes_persistent_volume_claim resource

Open · paleti5 opened this issue 3 years ago · 12 comments

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform and Provider Versions

$ terraform version
Terraform v0.13.4

  • provider registry.terraform.io/gavinbunney/kubectl v1.11.2
  • provider registry.terraform.io/hashicorp/azurerm v2.70.0
  • provider registry.terraform.io/hashicorp/helm v2.2.0
  • provider registry.terraform.io/hashicorp/kubernetes v2.3.2
  • provider registry.terraform.io/hashicorp/null v3.1.0
  • provider registry.terraform.io/hashicorp/template v2.2.0

Affected Resource(s)

  • kubernetes_persistent_volume_claim

Terraform Configuration Files

variable "spark_namespace" {
  type        = string
  description = "Namespace to create for spark and associated storage to be hosted in"
  default     = "test"
}
# Creates Spark & associated storage namespace
resource "kubernetes_namespace" "sparkns" {
  metadata {
    name = var.spark_namespace
  }
}
resource "kubernetes_persistent_volume" "datalake" {
  metadata {
    name = "pv-data-lake"
  }
  spec {
    capacity = {
      storage = "800Gi"
    }
    access_modes                     = ["ReadWriteMany"]
    persistent_volume_reclaim_policy = "Retain"
    mount_options = [
      "-o allow_other",
      "--file-cache-timeout-in-seconds=120"
    ]
    persistent_volume_source {
      csi {
        driver        = "blob.csi.azure.com"
        read_only     = false
        volume_handle = "unique-volumeid"
        volume_attributes = {
          resource_group  = var.datalake_resource_group_name
          storage_account = var.target_datalake
          container_name  = "fiona"
        }
        node_stage_secret_ref {
          name      = "az-dl-connection"
          namespace = kubernetes_namespace.sparkns.metadata[0].name
        }
      }
    }
  }
}

resource "kubernetes_persistent_volume_claim" "spark_pvc_dl" {
  metadata {
    name = "pvc-data-lake"
    namespace = kubernetes_namespace.sparkns.metadata[0].name
  }
  spec {
    access_modes = ["ReadWriteMany"]
    resources {
      requests = {
        storage = "10Gi"
      }
    }
    volume_name = kubernetes_persistent_volume.datalake.metadata[0].name
  }
}

Expected Behaviour

The persistent volume and persistent volume claim are created and the apply completes.

Actual Behaviour

Creation of kubernetes_persistent_volume_claim.spark_pvc_dl hangs and the apply eventually fails with "Error: context deadline exceeded".

Steps to Reproduce

  1. terraform apply


paleti5 avatar Aug 04 '21 11:08 paleti5

I have the same issue when trying to delete a namespace using the kubernetes provider; I also get a context deadline error after 5 minutes.

hashicorp/kubernetes v2.4.1
Terraform v1.0.5

Someone needs to look into this.

module.argocd.kubernetes_namespace.argocd: Still destroying... [id=argocd, 4m50s elapsed]
╷
│ Error: context deadline exceeded

you4su avatar Aug 30 '21 12:08 you4su
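
The five-minute cutoff in the output above matches the provider's default delete timeout for kubernetes_namespace. Below is a minimal sketch of raising that timeout while the underlying hang is investigated; the resource name and the 20m value are illustrative, not taken from the original configuration.

resource "kubernetes_namespace" "argocd" {
  metadata {
    name = "argocd"
  }

  # The provider defaults to a 5 minute delete timeout; give the namespace
  # longer to finish terminating before the apply gives up.
  timeouts {
    delete = "20m"
  }
}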

Any solution to this issue? I am facing the same.

chandankashyap19 avatar Oct 11 '21 17:10 chandankashyap19

I am facing this issue too.

hashicorp/kubernetes v2.5.0
Terraform v1.0.8

kubernetes_persistent_volume_claim.sports_pools_volume_claim: Still creating... [4m50s elapsed]
kubernetes_persistent_volume_claim.sports_pools_volume_claim: Still creating... [5m0s elapsed]
╷
│ Error: context deadline exceeded
│ 
│   with kubernetes_persistent_volume_claim.sports_pools_volume_claim,
│   on main.tf line 63, in resource "kubernetes_persistent_volume_claim" "sports_pools_volume_claim":
│   63: resource "kubernetes_persistent_volume_claim" "sports_pools_volume_claim" {
│ 
╵

th0masb avatar Oct 13 '21 14:10 th0masb

@paleti5 is me with a different GitHub account. I solved it with a shell script, as I couldn't find a solution in Terraform.

aleti-pavan avatar Oct 13 '21 14:10 aleti-pavan


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-data-lake
  namespace: NAMESPACE
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-data-lake
  storageClassName: ""


#! /bin/sh
# pvc-sed.sh

###########
# Usage:
###########

file=pvc.yaml
nameSpace='NAMESPACE'
new_nameSpace=$1

hsfile=pvc-hs.yaml
hs_nameSpace=$2

echo "Changing the NAMESPACE to $new_nameSpace"
sed -i "s|$nameSpace|$new_nameSpace|g" $file

echo "Catting the $file"
cat pvc.yaml
pwd
echo "creating pvc"
kubectl apply -f pvc.yaml
#sleep 100

echo "Changing the NAMESPACE to $hs_nameSpace"
sed -i "s|$nameSpace|$hs_nameSpace|g" $hsfile
echo "Catting the $hsfile"
cat pvc-hs.yaml
pwd
echo "creating pvc for history server"
kubectl apply -f pvc-hs.yaml
echo "Shell script finished"


resource "null_resource" "pvc" { provisioner "local-exec" {

command = <<-EOT
az aks get-credentials --resource-group ${var.resource_group_name} --name ${var.project_name}-cl --overwrite-existing
# kubectl apply -f ${path.module}/pvc.yaml
${path.module}/pvc-sed.sh ${kubernetes_namespace.sparkns.metadata[0].name} ${kubernetes_namespace.ns.metadata[0].name}
EOT
interpreter = ["PowerShell", "-Command"]

} depends_on = [ kubernetes_secret.cluster, kubernetes_persistent_volume.datalake ] }

aleti-pavan avatar Oct 13 '21 14:10 aleti-pavan
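
Worth noting: the YAML above sets storageClassName: "" explicitly, which the Terraform claim in the original report does not. Below is a minimal sketch of the same claim using the provider's storage_class_name argument, assuming the intent is to bind to the pre-provisioned pv-data-lake without involving a default storage class.

resource "kubernetes_persistent_volume_claim" "spark_pvc_dl" {
  metadata {
    name      = "pvc-data-lake"
    namespace = kubernetes_namespace.sparkns.metadata[0].name
  }
  spec {
    access_modes = ["ReadWriteMany"]

    # Empty string mirrors storageClassName: "" in the YAML above and keeps
    # the cluster's default storage class out of the binding.
    storage_class_name = ""

    resources {
      requests = {
        storage = "10Gi"
      }
    }

    volume_name = kubernetes_persistent_volume.datalake.metadata[0].name
  }
}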

In my case, the namespace was not deleting and was generating the "context deadline exceeded" error. After doing some research, I figured out that the namespace was stuck in the Terminating state. The cause was apiservice/v1beta1.metrics.k8s.io; after deleting apiservice/v1beta1.metrics.k8s.io the issue was resolved. For those facing issues with PVCs: please investigate what is stuck in the k8s cluster. This is not related to Terraform.

chandankashyap19 avatar Oct 20 '21 08:10 chandankashyap19

In my case, the namespace was not deleting and was generating the "context deadline exceeded" error. After doing some research, I figured out that the namespace was stuck in the Terminating state. The cause was apiservice/v1beta1.metrics.k8s.io; after deleting apiservice/v1beta1.metrics.k8s.io the issue was resolved.

For those facing issues with PVCs: please investigate what is stuck in the k8s cluster. This is not related to Terraform.

The error message could be better, so that we can tell what needs to be done; that's the whole point.

aleti-pavan avatar Oct 26 '21 22:10 aleti-pavan

I think the issue is that the PVC's storage class can have either volumeBindingMode: WaitForFirstConsumer or volumeBindingMode: Immediate. If it is Immediate, this issue should not occur. If your volume claim is waiting on its first consumer, the status will never be set to "Bound" and it will stay in "Pending" forever.

For me, adding wait_until_bound = false to the kubernetes_persistent_volume_claim worked. I have no other resources that depend on the PVC.

torgebauer avatar Nov 02 '21 13:11 torgebauer
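
A minimal sketch of the wait_until_bound suggestion applied to the claim from the original report; the resource names are carried over from the configuration above.

resource "kubernetes_persistent_volume_claim" "spark_pvc_dl" {
  metadata {
    name      = "pvc-data-lake"
    namespace = kubernetes_namespace.sparkns.metadata[0].name
  }
  spec {
    access_modes = ["ReadWriteMany"]
    resources {
      requests = {
        storage = "10Gi"
      }
    }
    volume_name = kubernetes_persistent_volume.datalake.metadata[0].name
  }

  # Do not block the apply waiting for the claim to reach the Bound phase.
  wait_until_bound = false
}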

@torgebauer I face the same issue, but have no PVCs at all. For me it's related to namespace deletion. It might be that Terraform is trying to delete the namespace before all the resources in it have been removed.

jrisch avatar Feb 09 '22 14:02 jrisch


same issue here

Terraform v1.4.5

berserkbuddhist avatar Jun 30 '23 06:06 berserkbuddhist