
When kubernetes_manifest is used, kubernetes provider config is invalid

Open cuttingedge1109 opened this issue 2 years ago • 8 comments

Terraform Version, Provider Version and Kubernetes Version

Terraform version: 0.15.4 (I use Terraform Cloud)
Kubernetes provider version: 2.5.0 (Same result for 2.4.0 and 2.3.2)
Kubernetes version: 1.20.2

Affected Resource(s)

All resources created by kubernetes_manifest

Terraform Configuration Files

terraform {
  backend "remote" {
    organization = "xxx"

    workspaces {
      name = "kubernetes-test"
    }
  }

  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.5.0"
    }
  }
}

provider "kubernetes" {
  experiments {
    manifest_resource = true
  }
  host                   = var.KUBE_HOST
  cluster_ca_certificate = base64decode(var.kube_cluster_ca_cert_data)
  client_key             = base64decode(var.kube_client_key_data)
  client_certificate     = base64decode(var.kube_client_cert_data)
}

resource "kubernetes_manifest" "test" {
  manifest = {
    "apiVersion" = "monitoring.coreos.com/v1"
    "kind"       = "PodMonitor"

    "metadata" = {
      "name"      = "test"
      "namespace" = "monitoring"
    }
    "podMetricsEndpoints" = [
      {
        "interval" = "60s"
        "path"     = "metrics/"
        "port"     = "metrics"
      }
    ]
    "selector" = {
      "matchLabels" = {
        "app.kubernetes.io/component" = "test"
        "app.kubernetes.io/name"      = "test"
      }
    }
  }
}

Debug Output

Panic Output

Steps to Reproduce

terraform plan

Expected Behavior

Plan the PodMonitor without error.

Actual Behavior

 Error: Failed to construct REST client
with kubernetes_manifest.test
cannot create REST client: no client config

Invalid attribute in provider configuration
with provider["registry.terraform.io/hashicorp/kubernetes"] in provider "kubernetes":
provider "kubernetes" {
'client_certificate' is not a valid PEM encoded certificate

Invalid attribute in provider configuration
with provider["registry.terraform.io/hashicorp/kubernetes"] in provider "kubernetes":
provider "kubernetes" {
'cluster_ca_certificate' is not a valid PEM encoded certificate

Invalid attribute in provider configuration
with provider["registry.terraform.io/hashicorp/kubernetes"] in provider "kubernetes":
provider "kubernetes" {
'client_key' is not a valid PEM encoded certificate

Important Factoids

If I remove the kubernetes_manifest resource, it works.

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

cuttingedge1109 avatar Oct 12 '21 13:10 cuttingedge1109

@cuttingedge1109 Can you please share a bit more detail about how the variables used in the provider configuration are themselves set? Also, please include the variable declarations. Is this part of a module? If so, please share the module invocation block too.

The end goal here is to determine how the values for those variables are produced.
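For illustration, declarations along these lines are what we're after (the variable names are taken from your configuration above; the types and sensitive flags here are just assumptions):

variable "KUBE_HOST" {
  type = string
}

variable "kube_cluster_ca_cert_data" {
  type      = string
  sensitive = true # base64-encoded PEM of the cluster CA certificate
}

variable "kube_client_key_data" {
  type      = string
  sensitive = true # base64-encoded PEM of the client key
}

variable "kube_client_cert_data" {
  type      = string
  sensitive = true # base64-encoded PEM of the client certificate
}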

alexsomesan avatar Oct 18 '21 09:10 alexsomesan

I stumbled upon this issue while checking whether there was an issue similar to https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/217 in this repository, now that the kubernetes_manifest resource has been merged into the main provider.

I'm encountering the same error when using the outputs of a google_container_cluster as the configuration for the kubernetes provider. If the GKE cluster is not already up and running, refreshing the state of the kubernetes_manifest (and any other action on it) will fail. To be clear, this is not the case with any other resources of the kubernetes provider.
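For context, my setup follows the usual GKE wiring, roughly like this (a sketch with made-up names, not my exact code):

data "google_client_config" "default" {}

resource "google_container_cluster" "primary" {
  name     = "my-gke-cluster"
  location = "europe-west1"
  # ...
}

# Provider config derived from the (possibly not-yet-created) cluster
provider "kubernetes" {
  host  = "https://${google_container_cluster.primary.endpoint}"
  token = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(
    google_container_cluster.primary.master_auth[0].cluster_ca_certificate
  )
}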

I can't be sure that's exactly the same usage as the OP, but it seemed similar to this issue. I'm using version 2.8.0 of the provider with terraform 1.0.3.

EDIT: Maybe my issue is actually closer to https://github.com/hashicorp/terraform-provider-kubernetes/issues/1391.

flovouin avatar Mar 07 '22 16:03 flovouin

Did you guys manage to get it working?

Lincon-Freitas avatar Apr 21 '22 11:04 Lincon-Freitas

Same here … the workaround we are currently working with is to use the kubectl_manifest resource from the gavinbunney/kubectl provider. It needs a bit of rewriting, though … using the yamlencode function, it would look something like this:

resource "kubectl_manifest" "keycloak_db" {
  yaml_body = yamlencode({
    apiVersion = "myapi/v1"
    kind       = "myservice"
    metadata = {
      labels = {
        team = terraform.workspace
      }
      name      = "${terraform.workspace}-myapp"
      namespace = var.namespace
    }
    spec = {
      […]
      resources = {
        limits = {
          cpu    = "500m"
          memory = "500Mi"
        }
        requests = {
          cpu    = "100m"
          memory = "100Mi"
        }
      }
      teamId = terraform.workspace
      volume = {
        size = "10Gi"
      }
    }
  })
}
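Note that the kubectl provider needs to be declared as well; a minimal sketch (the version constraint is just an example):

terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14.0"
    }
  }
}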

Dniwdeus avatar Apr 27 '22 15:04 Dniwdeus

Same problem here

loeffel-io avatar May 17 '22 18:05 loeffel-io

same issue

litan1106 avatar Jul 08 '22 14:07 litan1106

Same issue, but I have a running EKS cluster that it is failing against. Is this a known bug?

drornir avatar Jul 13 '22 22:07 drornir

Same issue as well... if I already have an EKS cluster created, it works, but if I'm creating it from scratch, it doesn't!


Thanks @Dniwdeus, your workaround above worked for me! But as you said, it's a workaround until someone finds a proper solution.

wsalles avatar Jul 14 '22 21:07 wsalles

@alexsomesan why was this closed (as completed)? Version 2.13.1 still has the issue and it doesn't look like the changes on the main branch since then contain a fix for that either: https://github.com/hashicorp/terraform-provider-kubernetes/compare/v2.13.1...48d1f35.

jeroenj avatar Sep 13 '22 14:09 jeroenj

Hey guys, we stumbled into the same kind of issue with Terraform and Kubernetes for the 4th time. The realisation we came to is that Terraform is not suitable for systems that rely on eventual consistency. So the approach we are taking is to use a tool that can be configured with Terraform, and then that tool deals with Kubernetes' eventual consistency. Even if you manage to cobble together a working deployment using sleeps and other tricks, as soon as it comes to decommissioning the resources you are in real trouble.

So what we have done is leverage the approach of https://github.com/aws-ia/terraform-aws-eks-blueprints: basically, Terraform configures ArgoCD, and ArgoCD then configures the Kubernetes resources.

In our case we wrap the resources up in a helm chart and call them using the ArgoCD Application resource. We then leverage an App-Of-Apps helm chart to orchestrate all the ArgoCD Application resources and call this helm chart from Terraform.
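A rough sketch of the Terraform side (the release name, chart path, and values schema are all illustrative; it assumes ArgoCD is already installed and the hashicorp/helm provider is configured):

# Terraform only deploys the App-of-Apps chart; ArgoCD then
# reconciles the individual Application resources it renders.
resource "helm_release" "app_of_apps" {
  name      = "app-of-apps"          # hypothetical release name
  chart     = "./charts/app-of-apps" # hypothetical local chart
  namespace = "argocd"

  values = [yamlencode({
    # each entry becomes an ArgoCD Application rendered by the chart
    applications = [
      {
        name           = "my-service"
        repoURL        = "https://git.example.com/org/deployments.git"
        path           = "charts/my-service"
        targetRevision = "main"
      }
    ]
  })]
}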

taliesins avatar Sep 14 '22 08:09 taliesins

Hi all - we closed this issue as it has become a catch-all for a multitude of different related issues. We ask that if you run into further related problems, please open a new issue outlining the specifics, and we will review them individually.

iBrandyJackson avatar Sep 21 '22 17:09 iBrandyJackson

For this case I'd say all the related issues mentioned describe the same root cause: the kubernetes_manifest resource requires the cluster it gets created in to already exist at plan time, unlike the other kubernetes_* resources.

That said, this issue was a duplicate of https://github.com/hashicorp/terraform-provider-kubernetes/issues/1391 anyway.

jeroenj avatar Sep 21 '22 21:09 jeroenj

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions[bot] avatar Oct 22 '22 02:10 github-actions[bot]