
Unable to import GKE BackendConfig resource

Open christopherdbull opened this issue 3 years ago • 7 comments

Terraform version, Kubernetes provider version and Kubernetes version

Terraform version: 1.1.3
Kubernetes provider version: 2.7.1
Kubernetes version: 1.20

Terraform configuration

resource "kubernetes_manifest" "ex_backend_config" {
  manifest = {
    "apiVersion" = "cloud.google.com/v1"
    "kind"       = "BackendConfig"
    "metadata" = {
      "name"      = "ex-backend-config"
      "namespace" = "default"
    }
    "spec" = {
      "cdn" = {
        "enabled" = true
      }
      "connectionDraining" = {
        "drainingTimeoutSec" = 60
      }
      "securityPolicy" = {
        "name" = "main-ingress-policy"
      }
      "timeoutSec" = 1800
    }
  }
}

Question

I'm trying to import a GKE BackendConfig resource into my Terraform state, but the import keeps failing with the error Failed to get namespacing requirement from RESTMapper. It looks like the error originates here: https://github.com/hashicorp/terraform-provider-kubernetes/blob/09fa4ea1d903b96c5c90c88e7ebf1ecf85cc4117/manifest/provider/import.go#L71, but I'm puzzled about the context and the reason for it. This is the command I'm running:

terraform import module.kubernetes-config.kubernetes_manifest.ex_backend_config "apiVersion=cloud.google.com/v1,kind=BackendConfig,namespace=default,name=ex-backend-config"

Any help is much appreciated.

christopherdbull avatar Jan 19 '22 18:01 christopherdbull

Hi! I'll try to reproduce this shortly. Is the cloud.google.com/v1 BackendConfig resource present by default on a GKE cluster, or do I need to install something?

alexsomesan avatar Feb 09 '22 15:02 alexsomesan

I am getting something similar for another resource:

[...].kubernetes_manifest.crd_podgroups: Importing from ID "apiVersion=apiextensions.k8s.io/v1,kind=CustomResourceDefinition,name=podgroups.scheduling.incubator.k8s.io"...
╷
│ Error: Failed to get namespacing requirement from RESTMapper
│
│ Unauthorized
╵

(The user has admin privileges and has imported plenty of other resources before.) For what it's worth, I get the same error even when I strip the trailing .io, so the problem seems to occur very early on...

therc avatar Feb 09 '22 18:02 therc

We were also having this issue with import (this was on EKS). We solved it by temporarily changing the kubernetes provider configuration. In our pipeline we use the following code to deploy manifests with the kubernetes provider, which works fine:

provider "kubernetes" {
  host                   = element(concat(data.aws_eks_cluster.cluster[*].endpoint, tolist([""])), 0)
  cluster_ca_certificate = base64decode(element(concat(data.aws_eks_cluster.cluster[*].certificate_authority.0.data, tolist([""])), 0))
  token                  = element(concat(data.aws_eks_cluster_auth.cluster[*].token, tolist([""])), 0)
}

But for import we had to change the provider configuration to use a local kubeconfig file with admin rights:

provider "kubernetes" {
  host                   = element(concat(data.aws_eks_cluster.cluster[*].endpoint, tolist([""])), 0)
  config_path = "~/.kube/config"
  config_context = "<context-name>"
}
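
An alternative we have not actually tried would be to let the provider fetch a fresh credential itself through an exec block, so no local kubeconfig is needed. A rough sketch only, assuming the aws CLI is available wherever terraform runs (var.cluster_name below is just illustrative):

provider "kubernetes" {
  host                   = element(concat(data.aws_eks_cluster.cluster[*].endpoint, tolist([""])), 0)
  cluster_ca_certificate = base64decode(element(concat(data.aws_eks_cluster.cluster[*].certificate_authority.0.data, tolist([""])), 0))

  # Fetch credentials at call time instead of storing a token in state.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name] # var.cluster_name is illustrative
  }
}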

Hope this helps anyone with the same issue.

iamnicolasvdb avatar Feb 15 '22 19:02 iamnicolasvdb

I had the same issue, and the workaround above from @iamnicolasvdb solved it for me as well. On GKE, Terraform v1.0.11.

luckyswede avatar Apr 08 '22 06:04 luckyswede

I am using an older version of Terraform for now, but both the Google and Kubernetes modules are at their latest versions, and I am also encountering this issue. We mainly use Terraform Cloud, and apply works.

Thanks @iamnicolasvdb, your workaround did the job for me, but it is still an issue that I have to make the main.tf change locally (and keep it out of my commits) every time I need to run a Kubernetes interaction...
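
One idea I have not fully tested: gate the two auth modes behind a variable, so nothing in main.tf has to be hand-edited. A sketch only; all the names on the right-hand side are illustrative placeholders:

variable "use_local_kubeconfig" {
  type    = bool
  default = false
}

provider "kubernetes" {
  # Normal mode: authenticate with the cluster endpoint and access token.
  host                   = var.use_local_kubeconfig ? null : var.cluster_endpoint     # illustrative
  token                  = var.use_local_kubeconfig ? null : var.cluster_access_token # illustrative
  cluster_ca_certificate = var.use_local_kubeconfig ? null : var.cluster_ca           # illustrative

  # Import/destroy mode: fall back to the local kubeconfig.
  config_path = var.use_local_kubeconfig ? pathexpand("~/.kube/config") : null
}

Then something like terraform import -var=use_local_kubeconfig=true ... would use the kubeconfig without touching any files.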

The issue happened every time I tried to destroy or import, but after running an apply (which updates the state) those actions succeed for up to an hour, since the state has changed.

So I took another look, and I believe that the issue is with using the state's cluster access token, like in the example:

data "google_client_config" "default" {}
data "google_container_cluster" "my_cluster" {
  name = "my-cluster"
  zone = "us-east1-a"
}

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.my_cluster.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
}

For some reason, when running destroy or import, the cluster's access_token is not refreshed; running an apply beforehand, even an empty one, updates the token, and a state refresh may also help (edit: verified that refreshing state pulls a fresh token). So I assume the fix would be to refresh the token whenever it has expired, for every invocation that makes API calls, not just on apply.
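
If that is the root cause, one way to side-step it might be to stop relying on the stored access_token and let the provider run the GKE auth plugin on every invocation. Untested sketch; it assumes gke-gcloud-auth-plugin is installed wherever terraform runs:

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.my_cluster.endpoint}"
  cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)

  # Fetch a fresh credential for every CLI operation (plan, apply, import,
  # destroy) instead of reusing a token captured during an earlier refresh.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "gke-gcloud-auth-plugin"
  }
}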

uda avatar Jun 30 '22 22:06 uda

I also came across this issue with the kubernetes provider; see also: https://github.com/hashicorp/terraform-provider-google/issues/11474

BackendConfig is a mandatory requirement for GKE with IAP. It seems like a kludge not to have BackendConfig and FrontendConfig as first-class resources in either the Kubernetes or the Google provider. Apparently, to set up IAP Ingress you must use the YAML method via Kubernetes. See: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features

From that document:

You cannot manually configure LoadBalancer features using the Google Cloud SDK or Google Cloud console. You must use BackendConfig or FrontendConfig Kubernetes resources.

My concern is that without a gcloud or kubectl resource, cross-resource referencing becomes tricky. I also wonder about state control issues, etc.
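
For what it's worth, cross-referencing from HCL does seem possible, since GKE attaches a BackendConfig to a Service through an annotation. A rough sketch, assuming it lives in the same module as the kubernetes_manifest from the top of this issue (the Service name, selector, and ports are just illustrative):

resource "kubernetes_service_v1" "example" {
  metadata {
    name      = "example"
    namespace = "default"
    annotations = {
      # GKE picks up the BackendConfig named in this annotation.
      "cloud.google.com/backend-config" = jsonencode({
        default = kubernetes_manifest.ex_backend_config.manifest.metadata.name
      })
    }
  }

  spec {
    selector = {
      app = "example"
    }
    type = "NodePort"
    port {
      port        = 80
      target_port = 8080
    }
  }
}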

independentid avatar Jul 18 '22 21:07 independentid

Ran into the same issue when importing an existing Ingress into a kubernetes_ingress_v1 resource. The workaround from @iamnicolasvdb worked.

terraform version
Terraform v1.2.5
on darwin_amd64
+ provider registry.terraform.io/hashicorp/google v4.29.0
+ provider registry.terraform.io/hashicorp/google-beta v4.29.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.12.1

shpml avatar Jul 19 '22 23:07 shpml

Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!

github-actions[bot] avatar Jul 20 '23 00:07 github-actions[bot]