terraform-provider-kubectl
Odd behavior: errors on apply for [v1/ConfigMap], [v1/Secret], and [storage.k8s.io/v1/StorageClass]
I have noticed some strange behavior in the way the provider handles API interactions with the k8s cluster: it treats the resource types above as invalid, even though they are all still valid API types. The code is:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
  ssl-protocols: "TLSv1.2 TLSv1.3"
  ssl_prefer_server_ciphers: "false"
data "kubectl_path_documents" "docs" {
pattern = "${path.module}/manifests/*.yaml"
}
resource "kubectl_manifest" "test" {
for_each = toset(data.kubectl_path_documents.docs.documents)
yaml_body = each.value
}
The plan is:
# module.NginxIngressControllerHelmChart.kubectl_manifest.test["kind: ConfigMap\r\napiVersion: v1\r\nmetadata:\r\n name: nginx-config\r\ndata:\r\n ssl-ciphers: \"ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384\"\r\n ssl-protocols: \"TLSv1.2 TLSv1.3\"\r\n ssl_prefer_server_ciphers: \"false\"\r\n\r\n#https://ssl-config.mozilla.org/#server=nginx&version=1.17.7&config=intermediate&openssl=1.1.1d&hsts=false&guideline=5.6"] will be created
+ resource "kubectl_manifest" "test" {
+ api_version = "v1"
+ force_new = false
+ id = (known after apply)
+ kind = "ConfigMap"
+ live_manifest_incluster = (sensitive value)
+ live_uid = (known after apply)
+ name = "nginx-config"
+ namespace = (known after apply)
+ server_side_apply = false
+ uid = (known after apply)
+ validate_schema = true
+ wait_for_rollout = true
+ yaml_body = (sensitive value)
+ yaml_body_parsed = <<-EOT
apiVersion: v1
data:
ssl-ciphers: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-protocols: TLSv1.2 TLSv1.3
ssl_prefer_server_ciphers: "false"
kind: ConfigMap
metadata:
name: nginx-config
EOT
+ yaml_incluster = (sensitive value)
}
The above should apply the rendered ConfigMap to the default namespace, but instead the following error is produced:
│ Error: nginx-config failed to create kubernetes rest client for update of resource: resource [v1/ConfigMap] isn't valid for cluster, check the APIVersion and Kind fields are valid
│
│ with module.NginxIngressControllerHelmChart.kubectl_manifest.test["kind: ConfigMap\r\napiVersion: v1\r\nmetadata:\r\n name: nginx-config\r\ndata:\r\n ssl-ciphers: \"ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384\"\r\n ssl-protocols: \"TLSv1.2 TLSv1.3\"\r\n ssl_prefer_server_ciphers: \"false\"\r\n\r\n#https://ssl-config.mozilla.org/#server=nginx&version=1.17.7&config=intermediate&openssl=1.1.1d&hsts=false&guideline=5.6"],
│ on nginx_helm_module\main.tf line 69, in resource "kubectl_manifest" "test":
│ 69: resource "kubectl_manifest" "test" {
The Kubernetes cluster is running v1.22.4, and similar errors are produced for the other types (Secrets, StorageClasses, etc.). I was also wondering how this provider deals with CRDs, for example cert-manager, which requires a ClusterIssuer kind. Any suggestions on how to overcome this are greatly appreciated.
I can also confirm the error exists for CRDs, such as:
Error: cert-manager/letsencrypt-prod-origin failed to create kubernetes rest client for update of resource: resource [cert-manager.io/v1/ClusterIssuer] isn't valid for cluster, check the APIVersion and Kind fields are valid
The above error is incorrect, since the cert-manager pods are present (installed via the Helm chart). This leads me to believe that the provider needs to be updated to take the Kubernetes cluster version into account, so that it has access to the default set of features offered by Kubernetes.
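For reference, the sort of manifest involved is nothing exotic; a minimal sketch of the ClusterIssuer (the ACME details and the helm_release reference are illustrative, not my exact config) looks like:

resource "kubectl_manifest" "letsencrypt_prod_origin" {
  # the CRDs must already exist on the cluster, so wait for the cert-manager chart
  # (helm_release.cert_manager is a hypothetical name for that release)
  depends_on = [helm_release.cert_manager]

  yaml_body = <<-YAML
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod-origin
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: admin@example.com
        privateKeySecretRef:
          name: letsencrypt-prod-origin
        solvers:
          - http01:
              ingress:
                class: nginx
  YAML
}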
Interestingly, the same ConfigMap config works when applied via the hashicorp/kubernetes provider's native resource:
resource "kubernetes_config_map_v1" "example" {
metadata {
name = "nginx-config"
}
data = {
ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
ssl-protocols: "TLSv1.2 TLSv1.3"
ssl_prefer_server_ciphers: "false"
}
}
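The same workaround seems to apply to the Secret case mentioned earlier; a minimal sketch with the hashicorp/kubernetes provider (the secret name, keys, and variable are made up for illustration):

resource "kubernetes_secret_v1" "example" {
  metadata {
    name = "nginx-basic-auth"
  }

  type = "Opaque"

  # values are given as plain strings; the provider handles the base64 encoding
  data = {
    username = "admin"
    password = var.nginx_admin_password
  }
}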
OK, so I found an interesting article which touches on this. It turns out you can use Terraform functions to convert a YAML file into HCL code (pretty cool, right?). Take an example:
functions/cows/cows.yaml
apiVersion: openfaas.com/v1
kind: Function
metadata:
  name: showcow
  namespace: openfaas-fn
spec:
  name: showcow
  handler: node show_cow.js
  image: alexellis2/ascii-cows-openfaas:0.1
We can use a couple of functions to convert our code from YAML to Terraform HCL: the first being yamldecode and the second being file:
echo 'yamldecode(file("cows.yaml"))' | terraform console
Once we run the above through terraform console, we get:
{
"apiVersion" = "openfaas.com/v1"
"kind" = "Function"
"metadata" = {
"name" = "showcow"
"namespace" = "openfaas-fn"
}
"spec" = {
"handler" = "node show_cow.js"
"image" = "alexellis2/ascii-cows-openfaas:0.1"
"name" = "showcow"
}
}
Notice the conversion. Now we can use the above output with the kubernetes_manifest resource to create the code we need, i.e.:
functions/cows/main.tf
resource "kubernetes_manifest" "openfaas_fn_showcow" {
manifest = {
"apiVersion" = "openfaas.com/v1"
"kind" = "Function"
"metadata" = {
"name" = "showcow"
"namespace" = "openfaas-fn"
}
"spec" = {
"handler" = "node show_cow.js"
"image" = "alexellis2/ascii-cows-openfaas:0.1"
"name" = "showcow"
}
}
}
The above should now be the code that we apply. Hope this helps anyone who is perplexed by the EOF etc. errors.
source: https://learn.hashicorp.com/tutorials/terraform/kubernetes-crd-faas
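As an aside (my own shortcut rather than something from the tutorial): for a single-document YAML file you can skip the hand transcription and decode the file directly into the manifest argument:

resource "kubernetes_manifest" "openfaas_fn_showcow" {
  # yamldecode only parses a single YAML document, so this assumes
  # cows.yaml contains exactly one manifest
  manifest = yamldecode(file("${path.module}/cows.yaml"))
}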
Note: after experimenting with the method above, make sure to have the following within the same .tf file, i.e.
file name: functions/cows/main.tf
provider "kubernetes" {
host = var.k8s_host
client_certificate = base64decode(var.k8s_client_certificate)
client_key = base64decode(var.k8s_client_key)
cluster_ca_certificate = base64decode(var.k8s_cluster_ca_certificate)
}
resource "kubernetes_manifest" "openfaas_fn_showcow" {
manifest = {
"apiVersion" = "openfaas.com/v1"
"kind" = "Function"
"metadata" = {
"name" = "showcow"
"namespace" = "openfaas-fn"
}
"spec" = {
"handler" = "node show_cow.js"
"image" = "alexellis2/ascii-cows-openfaas:0.1"
"name" = "showcow"
}
}
}
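For completeness, the kubectl provider that produces the original error accepts an equivalent connection block; a sketch reusing the same variables (argument names as I understand the gavinbunney/kubectl docs) would be:

provider "kubectl" {
  host                   = var.k8s_host
  client_certificate     = base64decode(var.k8s_client_certificate)
  client_key             = base64decode(var.k8s_client_key)
  cluster_ca_certificate = base64decode(var.k8s_cluster_ca_certificate)
  load_config_file       = false # don't fall back to ~/.kube/config
}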
Another bit of research which is quite interesting: https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/199#issuecomment-832614387
I've a similar issue on AKS 1.21.
I get:
Error: csi-azurefile-vsc failed to create kubernetes rest client for update of resource: resource [snapshot.storage.k8s.io/v1/VolumeSnapshotClass] isn't valid for cluster, check the APIVersion and Kind fields are valid
with the following code:
resource "kubectl_manifest" "volume_snaphot_class" {
yaml_body = file("${path.module}/test.yaml")
}
test.yaml contains:
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-azurefile-vsc
driver: "file.csi.azure.com"
deletionPolicy: Delete
On the other hand, kubectl apply works fine:
$ kubectl apply -f test.yaml
volumesnapshotclass.snapshot.storage.k8s.io/csi-azurefile-vsc unchanged
HTH
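One more thing that may be worth checking when the rest-client error appears is which build of the provider Terraform actually selected; a typical version pin (the constraint below is only an example) looks like:

terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14.0"
    }
  }
}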