terraform-provider-kubernetes
Kubernetes provider does not respect data when kubernetes_manifest is used
Terraform Version, Provider Version and Kubernetes Version
Terraform version: v1.0.5
Kubernetes provider version: v2.4.1
Kubernetes version: 1.20.8-gke.900
Affected Resource(s)
- kubernetes_manifest
Terraform Configuration Files
data "google_client_config" "this" {}

data "google_container_cluster" "this" {
  name     = "my-cluster"
  location = "europe-west2"
  project  = "my-project"
}

provider "kubernetes" {
  token                  = data.google_client_config.this.access_token
  host                   = data.google_container_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.google_container_cluster.this.master_auth.0.cluster_ca_certificate)

  experiments {
    manifest_resource = true
  }
}

resource "kubernetes_manifest" "test-crd" {
  manifest = {
    apiVersion = "apiextensions.k8s.io/v1"
    kind       = "CustomResourceDefinition"
    metadata = {
      name = "testcrds.hashicorp.com"
    }
    spec = {
      group = "hashicorp.com"
      names = {
        kind   = "TestCrd"
        plural = "testcrds"
      }
      scope = "Namespaced"
      versions = [{
        name    = "v1"
        served  = true
        storage = true
        schema = {
          openAPIV3Schema = {
            type = "object"
            properties = {
              data = {
                type = "string"
              }
              refs = {
                type = "number"
              }
            }
          }
        }
      }]
    }
  }
}
Debug Output
The debug log contains a lot of private information, so I'd prefer not to post it.
Steps to Reproduce
terraform apply
Expected Behavior
A plan is presented; after apply, the CRD is created successfully.
Actual Behavior
Error: Invalid attribute in provider configuration

  with provider["registry.terraform.io/hashicorp/kubernetes"],
  on main.tf line 9, in provider "kubernetes":
   9: provider "kubernetes" {

'host' is not a valid URL

Error: Failed to construct REST client

  with kubernetes_manifest.test-crd,
  on main.tf line 19, in resource "kubernetes_manifest" "test-crd":
  19: resource "kubernetes_manifest" "test-crd" {

cannot create REST client: no client config
Hi. Same issue.
It doesn't work with depends_on either.
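For context, here is a hedged sketch of the kind of depends_on attempt being described, using the data sources from the original report. The explicit dependency reportedly does not help, presumably because the provider configuration itself still depends on values that are unknown at plan time:

resource "kubernetes_manifest" "example" {
  # Hypothetical resource; the CustomResourceDefinition from the original
  # report would behave the same way.
  depends_on = [
    data.google_client_config.this,
    data.google_container_cluster.this,
  ]

  manifest = {
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "example"
      namespace = "default"
    }
    data = {
      key = "value"
    }
  }
}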
Started running into the following error on destroy, which I think is related; it didn't work with tostring() either:
│ Error: Provider configuration: failed to assert type of element in 'args' value
│
│ with module.services_tools.provider["registry.terraform.io/hashicorp/kubernetes"],
│ on ../../modules/services_tools/versions.tf line 23, in provider "kubernetes":
│ 23: provider "kubernetes" {
// This is required in order to pass information to the underlying Kubernetes
// provider for the EKS cluster above; see
// https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1280
provider "kubernetes" {
  experiments {
    manifest_resource = true
  }

  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
    command     = "aws"
  }
}
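For reference, here is a sketch of the tostring() variant mentioned above. It is a guess at what was tried, not taken verbatim from the comment; per the report, it still fails with the same type-assertion error:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    # Hypothetical variant: coerce the computed cluster name to a string.
    # According to the comment above, this still produces
    # "failed to assert type of element in 'args' value".
    args = ["eks", "get-token", "--cluster-name", tostring(data.aws_eks_cluster.cluster.name)]
  }
}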
Same error when using GCP and applying multiple manifests from the same file: Error: Failed to construct REST client
- Terraform 1.0.8
- kubernetes provider 2.5.0
data "google_client_config" "current" {}

data "google_container_cluster" "cluster" {
  name     = var.cluster_name
  location = var.cluster_location
}

provider "kubernetes" {
  host                   = data.google_container_cluster.cluster.endpoint
  client_certificate     = base64decode(data.google_container_cluster.cluster.master_auth.0.client_certificate)
  client_key             = base64decode(data.google_container_cluster.cluster.master_auth.0.client_key)
  cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)
  token                  = data.google_client_config.current.access_token

  experiments {
    manifest_resource = true
  }
}
resource "kubernetes_manifest" "default" {
  # Create a map { "kind--name" => yaml_doc } from the multi-document yaml text.
  # Each element is a separate kubernetes resource.
  # Must use \n---\n to avoid splitting on strings and comments containing "---".
  # YAML allows "---" to be the first and last line of a file, so make sure
  # raw yaml begins and ends with a newline.
  # The "---" can be followed by spaces, so need to remove those too.
  # Skip blocks that are empty or comments-only in case yaml began with a comment before "---".
  for_each = {
    for value in [
      for yaml in split(
        "\n---\n",
        "\n${replace(file("manifests.yaml"), "/(?m)^---[[:blank:]]+$/", "---")}\n"
      ) :
      yamldecode(yaml)
      if trimspace(replace(yaml, "/(?m)(^[[:blank:]]*(#.*)?$)+/", "")) != ""
    ] : "${value["kind"]}--${value["metadata"]["name"]}" => value
  }

  manifest = each.value
}
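In case it helps anyone verify the splitting expression in isolation, here is a self-contained sketch that applies the same logic to an inline heredoc instead of file("manifests.yaml"); the two ConfigMap documents are purely hypothetical:

locals {
  # Hypothetical multi-document YAML, standing in for manifests.yaml.
  raw_yaml = <<-EOT
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: first
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: second
  EOT

  # Same split/decode/filter logic as the for_each above.
  manifests = {
    for value in [
      for doc in split(
        "\n---\n",
        "\n${replace(local.raw_yaml, "/(?m)^---[[:blank:]]+$/", "---")}\n"
      ) :
      yamldecode(doc)
      if trimspace(replace(doc, "/(?m)(^[[:blank:]]*(#.*)?$)+/", "")) != ""
    ] : "${value["kind"]}--${value["metadata"]["name"]}" => value
  }
}

output "manifest_keys" {
  # Expected: ["ConfigMap--first", "ConfigMap--second"]
  value = keys(local.manifests)
}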
When using kubernetes provider v2.6.1 and terraform v1.x.x, the error shown is the following:
Invalid attribute in provider configuration
with provider["registry.terraform.io/hashicorp/kubernetes"],
on provider.tf line 24, in provider "kubernetes":
24: provider "kubernetes" {
'host' is not a valid URL
The error:
'host' is not a valid URL
is likely because:
host = data.google_container_cluster.this.endpoint
should have been (as per #1468):
host = "https://${data.google_container_cluster.this.endpoint}"
but:
cannot create REST client: no client config
is happening for me despite host being a URL, and I'm not sure where to look next to diagnose.
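For concreteness, here is a minimal sketch of the provider block from the original report with the scheme prefix added (same data sources assumed); this clears the 'host' is not a valid URL error, but not the REST client one:

provider "kubernetes" {
  # Prefix the bare GKE endpoint with a scheme so it parses as a URL (see #1468).
  host  = "https://${data.google_container_cluster.this.endpoint}"
  token = data.google_client_config.this.access_token
  cluster_ca_certificate = base64decode(
    data.google_container_cluster.this.master_auth.0.cluster_ca_certificate
  )

  experiments {
    manifest_resource = true
  }
}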
Edit:
Seen in logs (TF_LOG=TRACE terraform apply):
2021-11-01T17:16:22.257+1100 [DEBUG] provider.terraform-provider-kubernetes_v2.6.1_x5: 2021-11-01T17:16:22.256+1100 [ERROR] [Configure]: Failed to load config:="&{0xc001212820 0xc0007e6fc0 <nil> 0xc000176c00 {0 0} 0xc001211f30}"
so it looks like this code path is being taken. I noted the comment:
// this is a terrible fix for if the configuration is a calculated value
so perhaps clientConfig is expected to be populated elsewhere, later on...
This may have been evident from the issue title, but those looking for a workaround can remove dynamic/data values from the provider configuration.
E.g., given a suitably configured kubectl environment, replacing:
provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.default.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.default.master_auth.0.cluster_ca_certificate)
}
with:
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "gke_my-project_my-region_my-cluster"
}
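Another variation on the same idea, sketched here rather than taken from the thread: pass the connection details in as plain input variables instead of data sources, so that nothing in the provider block is a computed value at plan time. The variable names are hypothetical and the values have to be supplied externally (for example via a *.tfvars file):

variable "cluster_endpoint" {
  type = string
}

variable "cluster_ca_certificate_b64" {
  type = string
}

variable "cluster_token" {
  type      = string
  sensitive = true
}

provider "kubernetes" {
  host                   = "https://${var.cluster_endpoint}"
  token                  = var.cluster_token
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate_b64)
}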
Getting "Failed to construct REST client" when I try to deploy an Argo CD application to a non-existent EKS cluster. It works fine against a running EKS cluster.
│ Error: Failed to construct REST client
│
│ with module.argocd_application_gitops.kubernetes_manifest.argo_application,
│ on .terraform/modules/argocd_application_gitops/main.tf line 1, in resource "kubernetes_manifest" "argo_application":
│ 1: resource "kubernetes_manifest" "argo_application" {
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

module "eks" {
  ...
}

module "argocd_application_gitops" {
  depends_on = [module.vpc, module.eks, module.eks_services]

  source  = "project-octal/argocd-application/kubernetes"
  version = "2.0.0"

  argocd_namespace    = var.argocd_k8s_namespace
  destination_server  = "https://kubernetes.default.svc"
  project             = var.argocd_project_name
  name                = "gitops"
  namespace           = "myns"
  repo_url            = var.argocd_root_gitops_url
  path                = "Chart"
  chart               = ""
  target_revision     = "master"
  automated_self_heal = true
  automated_prune     = true
}
Apparently, the helm provider (when configured in the same way) does not have this issue. So I can have the helm resources described in TF when the cluster does not exist. But I can't have the k8s manifest TF code in the project until the cluster is created.
It would be great to see the issue with Failed to construct REST client for the Kubernetes provider solved soon! 🤞
Same problem with cert-manager:
Error: Failed to construct REST client

  with module.eks_cluster_first.module.cert_manager.kubernetes_manifest.cluster_issuer_selfsigned,
  on modules\cert_manager\cert_manager.tf line 89, in resource "kubernetes_manifest" "cluster_issuer_selfsigned":
  89: resource "kubernetes_manifest" "cluster_issuer_selfsigned" {

cannot create REST client: no client config
Same issue here. Serious blocker for us. :(
Still seeing this on provider version 2.10.0
I ended up moving my kubernetes_manifest resources to another Terraform project invoked after the cluster is created but definitely not ideal.
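For anyone taking the same route, here is a rough sketch of what the second, separately applied project can look like (cluster name and manifest are hypothetical); because the cluster already exists when this configuration is planned, the data sources resolve and the provider gets a complete configuration:

data "aws_eks_cluster" "cluster" {
  # Name of the cluster created by the first project (hypothetical).
  name = "my-existing-cluster"
}

data "aws_eks_cluster_auth" "cluster" {
  name = data.aws_eks_cluster.cluster.name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

resource "kubernetes_manifest" "example" {
  manifest = {
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "example"
      namespace = "default"
    }
    data = {
      key = "value"
    }
  }
}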
How is this still an issue? Still affected.
The problem is still present; a fix would be much appreciated.
Still an issue, please fix this
+1
Same here.
+1, this is a significant problem
+1 - This even occurs if I try to run a plan using -target to deploy the cluster first
Still an issue with terraform plan when the cluster is not yet present!
same here
+1
I have this issue as well
Same here, 1.5 years and counting.
Also running into this issue. Since I have a custom resource, I want to use the kubernetes_manifest resource; however, according to the documentation:
This resource requires API access during planning time. This means the cluster has to be accessible at plan time and thus cannot be created in the same apply operation.
+1
Same issue here:
Error: Failed to construct REST client
and
cannot create REST client: no client config
Same...
Failed to construct REST client
cannot create REST client: no client config
Still an issue! I cannot create the AWS infra and everything related in a new, empty account because the EKS cluster does not yet exist, even though I have dependencies. That's silly!