terraform-provider-kubernetes
kubernetes_manifest REST client Error
Terraform Version, Provider Version and Kubernetes Version
Terraform version: v1.2.6
Kubernetes provider version: v2.16.1
Kubernetes version: v1.23.9-eks-ba74326
Affected Resource(s)
- kubernetes_manifest
Terraform Configuration Files
provider "kubernetes" {
  alias                  = "k8s-production"
  host                   = aws_eks_cluster.production.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.production.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.production.name]
    command     = "aws"
  }
}
...
resource "kubernetes_manifest" "externalsecret-grafana" {
  manifest = yamldecode(<<-EOF
    apiVersion: external-secrets.io/v1beta1
    kind: ExternalSecret
    metadata:
      annotations:
        terraform: true
      name: grafana
      namespace: mgmt
    spec:
      data:
      ...
  EOF
  )
}
Debug Output
Panic Output
Steps to Reproduce
- terraform import kubernetes_manifest.externalsecret-grafana "apiVersion=external-secrets.io/v1beta1,kind=ExternalSecret,namespace=mgmt,name=grafana"
- or just: terraform plan
Expected Behavior
What should have happened?
- The kubernetes_manifest resource should be imported (or added to the state)
Actual Behavior
What actually happened?
- terraform plan
╷
│ Error: Failed to construct REST client
│
│ with kubernetes_manifest.externalsecret-grafana,
│ on kubernetes.tf line 114, in resource "kubernetes_manifest" "externalsecret-grafana":
│ 114: resource "kubernetes_manifest" "externalsecret-grafana" {
│
│ cannot create REST client: no client config
╵
- terraform import
╷
│ Error: Failed to get RESTMapper client
│
│ cannot create discovery client: no client config
╵
Important Factoids
References
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
I noticed the kubernetes_manifest resource you quoted in the description doesn't include a provider attribute pointing to the alias of the kubernetes provider block you also quoted. Can you please confirm whether you have such a reference in your configuration? Without it, the resource will try to use the "default" provider block instead of the aliased one, and that one is evidently not configured.
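If that is indeed the cause, the fix is to add the `provider` meta-argument to the resource. A sketch based on the configuration quoted in the description (the `provider` line is the only addition; the spec is abbreviated):

```hcl
resource "kubernetes_manifest" "externalsecret-grafana" {
  # Point the resource at the aliased provider block; without this it
  # falls back to the unconfigured default "kubernetes" provider.
  provider = kubernetes.k8s-production

  manifest = yamldecode(<<-EOF
    apiVersion: external-secrets.io/v1beta1
    kind: ExternalSecret
    metadata:
      name: grafana
      namespace: mgmt
  EOF
  )
}
```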
I get the same error with my default kubernetes provider configured as follows:
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
and the manifest definition looks like:
resource "kubernetes_manifest" "my_service" {
  manifest = {
    "apiVersion" = "elbv2.k8s.aws/v1beta1"
    "kind"       = "TargetGroupBinding"
    "metadata" = {
      "name"      = aws_alb_target_group.service.name
      "namespace" = "default"
    }
    "spec" = {
      "serviceRef" = {
        "name" = "my_service"
        "port" = 80
      }
      "targetGroupARN" = aws_alb_target_group.service.arn
      "targetType"     = "ip"
    }
  }
}
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
Config & version
provider "aws" {
  region = var.region

  default_tags {
    tags = {
      Environment = local.var.environment
      Project     = local.var.project
    }
  }
}

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.11.0"
    }
  }
}

data "aws_eks_cluster_auth" "eks_cluster" {
  name = module.eks.cluster_name
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.eks_cluster.token
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.eks_cluster.token
  }
}
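One thing worth checking with a setup like this: kubernetes_manifest needs to reach the cluster API already at plan time, so every provider argument must be resolvable then. A hedged alternative to the static data-source token is exec-based authentication, which fetches a fresh token on each run (a sketch reusing the `module.eks` outputs from the config above; it assumes the AWS CLI is available where Terraform runs):

```hcl
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  # Fetch a token on demand instead of embedding one resolved at plan time.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}
```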
Ingress
resource "kubernetes_manifest" "ingress" {
  for_each = { for env in local.var.envs : "api.${env.domain}" => env }

  manifest = yamldecode(templatefile("values/${each.value.name}-backend-api-ingress.yaml", {
    name            = "${each.value.name}-${each.value.type}"
    alb_group_name  = "${local.name_prefix}-alb"
    certificate_arn = aws_acm_certificate.acm[each.value.domain].arn
    domain          = each.value.domain
  }))
}
error
Error: Terraform exited with code 1.
╷
│ Error: Failed to construct REST client
│
│ with kubernetes_manifest.ingress["api.dev.domain.com"],
│ on eks-ingress.tf line 1, in resource "kubernetes_manifest" "ingress":
│ 1: resource "kubernetes_manifest" "ingress" {
│
│ cannot create REST client: no client config
How can I fix this? Any help is appreciated.
Same issue here trying to create a virtual service:
resource "kubernetes_manifest" "api_vs" {
  manifest = {
    apiVersion = "networking.istio.io/v1alpha3"
    kind       = "VirtualService"
    metadata = {
      name      = "api_vs"
      namespace = var.istio_ns
    }
    spec = {
      hosts    = ["*"] # Change this to the domain or hostname you want to use to access Kiali
      gateways = ["istio-ingressgateway"]
      http = [{
        route = [{
          destination = {
            host = "api-gateway-service.${var.istio_ns}.svc.cluster.local"
            port = {
              number = 8580 # Port where the Kiali dashboard is served
            }
          }
        }]
      }]
    }
  }
}
This is my provider config
data "google_client_config" "current" {}

provider "kubernetes" {
  host                   = "https://${module.gcp_gke.gke_cluster_endpoint}"
  client_certificate     = base64decode(module.gcp_gke.gke_client_certificate)
  client_key             = base64decode(module.gcp_gke.gke_client_key)
  cluster_ca_certificate = base64decode(module.gcp_gke.gke_cluster_ca_certificate)
  token                  = data.google_client_config.current.access_token
}
Error: cannot create REST client: no client config
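One thing that stands out in the provider block above is that it mixes two authentication methods: client certificate/key and an OAuth access token. On GKE the access token plus the cluster CA is usually sufficient, so a token-only variant may be worth trying (a sketch assuming the same `module.gcp_gke` outputs; whether this resolves the "no client config" error here is untested):

```hcl
data "google_client_config" "current" {}

provider "kubernetes" {
  host                   = "https://${module.gcp_gke.gke_cluster_endpoint}"
  cluster_ca_certificate = base64decode(module.gcp_gke.gke_cluster_ca_certificate)

  # Token-only auth: avoid mixing client certificates with an OAuth token.
  token = data.google_client_config.current.access_token
}
```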