terraform-provider-kubectl
Resources defined with kubectl_path_documents not picked up when called from a module
It looks like my resources defined with kubectl_path_documents are not being picked up when called from a module.
Running terraform apply from within the module directory creates the correct number of resources.
Running terraform apply from the root project directory that sources the module ignores all resources created from path_documents.
I am not sure whether this is intended or a bug.
on v1.14.0
Using ${path.module}/manifests/*.yaml works.
I'm having the same issue with kubectl_path_documents. I tried what @lifelofranco suggested, but the terraform plan does not show the resource "kubectl_manifest" "crd" {} to be created, and the data call is not getting populated. I'm using Terraform 1.0.1 and kubectl provider plugin v1.14.0.
data "kubectl_path_documents" "manifests" {
pattern = "${path.module}/manifests/*.yaml"
vars = {
karpenter-launch-template-name = module.karpenter.karpenter-launch-template-name
cluster_name = module.eks.cluster_name
requirement_cpu_limit = 1000
ttlSecondsAfterEmpty = 30
}
}
resource "kubectl_manifest" "crd" { wait = true count = length(data.kubectl_path_documents.manifests.documents) yaml_body = element(data.kubectl_path_documents.manifests.documents, count.index) }
When I don't use data "kubectl_path_documents" {} and instead copy and paste the contents of my YAML file inline into a kubectl_manifest resource as below, it works: I can see the resource to be created in the plan.
resource "kubectl_manifest" "provisioner_with_csg_launch_template" { yaml_body = <<-YAML apiVersion: karpenter.sh/v1alpha5 kind: Provisioner metadata: name: default spec: requirements: - key: karpenter.sh/capacity-type operator: In values: ["spot"] limits: resources: cpu: 1000 provider: launchTemplate: ${module.karpenter.karpenter-launch-template-name} subnetSelector: karpenter.sh/discovery: ${module.eks.cluster_name} tags: karpenter.sh/discovery: ${module.eks.cluster_name} ttlSecondsAfterEmpty: 30 YAML
}
@b2jude I am going to assume that this provider is no longer properly supported. I only found out when attempting to create our 'staging' environment that it does not function: the resources aren't being picked up by Terraform, despite being defined properly in my module.
You could attempt to use the official Kubernetes provider here (https://github.com/hashicorp/terraform-provider-kubernetes).
However, you'd have to define your K8s manifests in HCL rather than YAML, since the yamldecode(file()) functions freak out when you have multi-document YAMLs split by ---.
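For reference, a minimal sketch of working around that by splitting on the document separator before decoding (the file path is illustrative, and the naive split assumes --- never appears inside a document body):

```hcl
locals {
  # Split the multi-document file on the YAML document separator, then
  # decode each non-empty chunk individually.
  raw_docs  = split("\n---\n", file("${path.module}/manifests/all.yaml"))
  manifests = [for doc in local.raw_docs : yamldecode(doc) if trimspace(doc) != ""]
}
```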
As a last resort, I just created a null_resource and local-exec'd a kubectl apply -f call to deploy my newest manifests.
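A rough sketch of that workaround (the trigger and manifest path are assumptions, not from the original comment, and it assumes kubectl is already configured against the target cluster):

```hcl
resource "null_resource" "apply_manifests" {
  # Re-run whenever any manifest in the folder changes (illustrative trigger).
  triggers = {
    manifests_hash = sha1(join("", [
      for f in fileset("${path.module}/manifests", "*.yaml") :
      filesha1("${path.module}/manifests/${f}")
    ]))
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${path.module}/manifests/"
  }
}
```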
Anyways, that's just my two cents.
Facing the same issue; this code doesn't work:
data "kubectl_path_documents" "docs" {
pattern = "${path.module}/manifests/*.yaml"
}
resource "kubectl_manifest" "argocd" {
count = length(data.kubectl_path_documents.docs.documents)
yaml_body = element(data.kubectl_path_documents.docs.documents, count.index)
override_namespace = "argocd"
}
```
│ Error: Invalid count argument
│
│   on modules\helm_argocd\main.tf line 13, in resource "kubectl_manifest" "argocd":
│   13:   count = length(data.kubectl_path_documents.docs.documents)
│
│ The "count" value depends on resource attributes that cannot be determined
│ until apply, so Terraform cannot predict how many instances will be created.
```
Do you have any updates on this problem? Are there any replacement ideas? This was working so well for me, and it just stopped working on a new setup, while it still works where you are updating an existing deployment.
Do not use count. There was an error in the docs of the gavinbunney version. Long story short: if you use count and then remove one of the documents, it will likely cause a cascading delete/recreate, because your documents are indexed by position. If you use for_each, documents are indexed by filename, so if you remove a file, only that manifest will be removed.
See https://github.com/alekc/terraform-provider-kubectl/issues/50 discussion.
TL;DR: as per https://registry.terraform.io/providers/alekc/kubectl/latest/docs/data-sources/kubectl_path_documents#load-all-manifest-documents-from-a-folder-via-for_each-recommended, try this:
data "kubectl_path_documents" "manifests-directory-yaml" {
pattern = "./manifests/*.yaml"
}
resource "kubectl_manifest" "directory-yaml" {
for_each = data.kubectl_path_documents.manifests-directory-yaml.manifests
yaml_body = each.value
}
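Adapted to the argocd example above (a sketch; the pattern and override_namespace are carried over from the earlier snippet):

```hcl
data "kubectl_path_documents" "docs" {
  pattern = "${path.module}/manifests/*.yaml"
}

resource "kubectl_manifest" "argocd" {
  # for_each keys the instances by manifest rather than by position, so
  # removing one file only removes that manifest.
  for_each           = data.kubectl_path_documents.docs.manifests
  yaml_body          = each.value
  override_namespace = "argocd"
}
```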