
Create local YAML from kops_kube_config data resource

Open · ddelange opened this issue on Dec 15 '21 · 6 comments

Problem

The kops_kube_config data source example will start causing errors after the first apply, ref https://github.com/hashicorp/terraform/issues/27934

Concretely, using dependent providers:

data "kops_kube_config" "kube_config" {
  cluster_name = kops_cluster.cluster.name
  # ensure the cluster has been launched/updated
  depends_on = [kops_cluster_updater.updater]
}

provider "kubectl" {
  host                   = data.kops_kube_config.kube_config.server
  username               = data.kops_kube_config.kube_config.kube_user
  password               = data.kops_kube_config.kube_config.kube_password
  client_certificate     = data.kops_kube_config.kube_config.client_cert
  client_key             = data.kops_kube_config.kube_config.client_key
  cluster_ca_certificate = data.kops_kube_config.kube_config.ca_cert
  load_config_file       = "false"
}

provider "helm" {
  kubernetes {
    host                   = data.kops_kube_config.kube_config.server
    username               = data.kops_kube_config.kube_config.kube_user
    password               = data.kops_kube_config.kube_config.kube_password
    client_certificate     = data.kops_kube_config.kube_config.client_cert
    client_key             = data.kops_kube_config.kube_config.client_key
    cluster_ca_certificate = data.kops_kube_config.kube_config.ca_cert
  }
}

This will cause errors after the first successful apply, i.e. upon the first subsequent apply:

kubectl.kubectl_manifest:

│ Error: failed to create kubernetes rest client for read of resource: Get "http://localhost/api?timeout=32s": dial tcp [::1]:80: connect: connection refused

helm.helm_release:

│ Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

Workaround

Manually delete the affected kubectl_manifest and helm_release entries from the tfstate after each apply.
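
For reference, a minimal sketch of that cleanup step, with placeholder resource addresses (substitute the ones reported by terraform state list):

# the addresses below are placeholders; use `terraform state list` to find the real ones
terraform state rm kubectl_manifest.example
terraform state rm helm_release.example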

Suggestion

It would be cool to have a yaml_body attribute on the data source, so that we can:

  • Create a local kops_kube_config.yaml file from the data source
  • Point the dependent providers at that file instead of the dynamic values, like here (see the sketch below)
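
A rough sketch of how that could look, assuming a hypothetical yaml_body attribute on the data source and using the hashicorp/local provider to write the file:

locals {
  kops_kube_config_filename = "${path.module}/kops_kube_config.yaml"
}

data "kops_kube_config" "kube_config" {
  cluster_name = kops_cluster.cluster.name
  # ensure the cluster has been launched/updated
  depends_on = [kops_cluster_updater.updater]
}

# write the kubeconfig to a local file
resource "local_file" "kops_kube_config" {
  content         = data.kops_kube_config.kube_config.yaml_body # hypothetical attribute
  filename        = local.kops_kube_config_filename
  file_permission = "0600"
}

# the providers read a static local value instead of dynamic data source attributes
provider "kubectl" {
  config_path = local.kops_kube_config_filename
}

provider "helm" {
  kubernetes {
    config_path = local.kops_kube_config_filename
  }
}

The idea is that the provider blocks then only reference a static local value, so the kubeconfig contents no longer have to flow through the provider configuration during plan/refresh.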

ddelange · Dec 15 '21 16:12

Hello, what version are you using?

eddycharly · Dec 15 '21 18:12

Hi! That was quick :) Not on the latest, now that you say so. How is it relevant? Just curious.

$ terraform version
Terraform v1.0.9
on darwin_amd64
+ provider registry.terraform.io/eddycharly/kops v1.21.2-alpha.2
+ provider registry.terraform.io/gavinbunney/kubectl v1.13.1
+ provider registry.terraform.io/hashicorp/aws v3.58.0
+ provider registry.terraform.io/hashicorp/helm v2.3.0
+ provider registry.terraform.io/hashicorp/http v2.1.0
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/tls v3.1.0
+ provider registry.terraform.io/invidian/sshcommand v0.2.2
+ provider registry.terraform.io/rancher/rancher2 v1.21.0

ddelange · Dec 15 '21 20:12

You're right, I misunderstood your issue, sorry.

Not sure how to implement a yaml_body attribute; I'll dig into it ASAP. I find it crazy that Terraform only partially supports this, it looks very error-prone.

eddycharly · Dec 15 '21 22:12

I wonder how Terraform can build a plan when the cluster does not exist yet?

eddycharly · Dec 15 '21 23:12

Yes, it was also a big surprise to me that there is no depends_on for providers (ref https://github.com/hashicorp/terraform/issues/2430).

Regarding the plan when there is no cluster yet: it's probably the same as when there is already a cluster, namely initializing the providers with nulls. For planning that is apparently sufficient, but how Terraform manages to fill in the values just in time (re-initializing the providers?) so that the initial apply actually works is a mystery to me.

Probably the subsequent applies would also work if it weren't for the refresh. I've already tried to find some way to defer the refresh of the relevant resources, but no luck.

ddelange · Dec 16 '21 06:12

My current hack to write the kubeconfig to a local file and use it downstream (requires the kops executable to be available):

resource "kops_cluster_updater" "updater" {
  ...

  provisioner "local-exec" {
    command = "kops export kubeconfig '${self.cluster_name}' --state 's3://${local.state_bucket_name}' --admin --kubeconfig ${local.kops_kube_config_filename}"
  }
}

provider "kubectl" {
  config_path = local.kops_kube_config_filename
}

provider "helm" {
  kubernetes {
    config_path = local.kops_kube_config_filename
  }
}

ref https://kops.sigs.k8s.io/cli/kops_export_kubeconfig/

EDIT: for the helm provider this works (it apparently reads the path lazily when applying a helm_release), but it looks like the kubectl provider reads the config upon provider init, so this workaround won't work for kubectl, ref https://github.com/hashicorp/terraform/issues/2430#issuecomment-150634097.

ddelange · Dec 21 '21 08:12