terraform-provider-kubectl
Any good solutions for "The 'for_each' value depends on resource attributes that cannot be determined until apply"?
I guess this is a common issue that has been discussed a lot.
I have this:
data "template_file" "app" {
template = file("templates/k8s_app.yaml")
vars = {
db_host = module.db.this_rds_cluster_endpoint # whatever resources to be created
}
}
data "kubectl_file_documents" "app" {
content = data.template_file.app.rendered
}
resource "kubectl_manifest" "app" {
for_each = data.kubectl_file_documents.app.manifests
yaml_body = each.value
}
I got:
```text
Error: Invalid for_each argument
│
│   on k8s_app.tf line 36, in resource "kubectl_manifest" "app":
│   36:   for_each = data.kubectl_file_documents.app.manifests
│     ├────────────────
│     │ data.kubectl_file_documents.app.manifests is a map of string, known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply,
│ so Terraform cannot predict how many instances will be created. To work around this, use
│ the -target argument to first apply only the resources that the for_each depends on.
```
Not sure if there are any best practices or solutions.
At the very least, the docs should probably be updated, since the current example doesn't seem to work.
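For reference, the two-phase apply the error message itself suggests would be `terraform apply -target=module.db` (creating just the upstream dependency so the map keys become known), followed by a plain `terraform apply`.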
Getting the same error with this use case. This worked a couple of days ago with another module; not sure why it's not working this way.
```hcl
locals {
  git_secret_name  = "git-creds"
  okta_secret_name = "okta-creds"
}

data "kubectl_path_documents" "external_secrets" {
  pattern = "${path.module}/external-secrets.yaml"

  vars = {
    namespace        = kubernetes_namespace.namespace.metadata[0].name
    project_id       = data.google_project.project.project_id
    git_secret_name  = local.git_secret_name
    okta_secret_name = local.okta_secret_name
  }
}

resource "kubectl_manifest" "external_secrets" {
  for_each           = data.kubectl_path_documents.external_secrets.manifests
  yaml_body          = each.value
  override_namespace = kubernetes_namespace.namespace.metadata[0].name
}
```
For now I removed all of the to-be-computed variables from `vars`. Instead, I created ConfigMaps or Secrets with the kubernetes provider and then referenced them in the k8s manifest YAML.
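A minimal sketch of that pattern, reusing the `db_host` example from the top of the thread (the resource and ConfigMap names are illustrative): the computed value lives in a ConfigMap managed by the kubernetes provider, so the YAML fed to `kubectl_manifest` stays static and its `for_each` keys are known at plan time.

```hcl
# Illustrative only: the computed RDS endpoint goes into a ConfigMap, so the
# manifest templates no longer interpolate any "known after apply" values.
resource "kubernetes_config_map" "db" {
  metadata {
    name = "db-config"
  }

  data = {
    db_host = module.db.this_rds_cluster_endpoint
  }
}
```

The app's manifest then reads the endpoint via `envFrom`/`configMapKeyRef` instead of a Terraform template variable.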
The workaround I found involves using the `fileset` function to get a count of the number of files. As an example:
data "kubectl_path_documents" "proxy_docs" {
pattern = "${path.module}/values/proxy/*.yaml"
vars = {
namespace = kubernetes_namespace.proxy.id
}
}
resource "kubectl_manifest" "proxy_manifests" {
count = length(fileset(path.module, "/values/proxy/*.yaml"))
yaml_body = element(data.kubectl_path_documents.proxy_docs.documents, count.index)
}
Not perfect but seems to do the trick.
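One caveat: `count` identifies documents by position, so adding or removing a file shifts `count.index` and makes Terraform recreate manifests that didn't actually change. The `fileset`-keyed `for_each` variants shown later in this thread key on file names instead, which avoids that churn.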
It would really help to convert/clone these `data` objects into `resource`s; that would be a clean workaround.
I have the same issue with the following code.
data "template_file" "container_insights" {
depends_on = [
module.eks,
module.irsa,
helm_release.aws_vpc_cni
]
template = file("${path.module}/charts-manifests-templates/cloudwatch-insights.yaml.tpl")
vars = {
iam_role_arn = module.irsa.container_insights_fluentd[0].iam_role_arn
}
}
data "kubectl_file_documents" "container_insights" {
depends_on = [
data.template_file.container_insights,
]
content = data.template_file.container_insights.rendered
}
resource "kubectl_manifest" "container_insights" {
depends_on = [
data.kubectl_file_documents.container_insights,
data.template_file.container_insights,
]
for_each = data.kubectl_file_documents.container_insights.manifests
yaml_body = each.value
}
It's happy to plan until you change something, like adding or removing files from the folder... this is insanely frustrating. :-) As @reubenavery said, I have seen some providers use resources instead of data sources to work around this issue in Terraform.
Does anyone know how to unblock Terraform once you get into this state? It was working before, then I removed a few files from the manifests folder and now it's angry.
The workaround I found only works for `kubectl_filename_list` and not `kubectl_file_documents`. You can use the equivalent `fileset` function in Terraform to get rid of the data source, so the following:
data "kubectl_filename_list" "this" {
pattern = "${path.module}/manifests/*.yaml"
}
resource "kubectl_manifest" "this" {
for_each = { for k in data.kubectl_filename_list.this.matches : k => k }
yaml_body = templatefile(each.value, {
foo = "bar"
})
}
can be completely replaced by:
resource "kubectl_manifest" "this" {
for_each = fileset(path.module, "manifests/*.yaml")
yaml_body = templatefile("${path.module}/${each.value}", {
foo = "bar"
})
}
Sadly this does not work for `kubectl_file_documents`, so you need to have every k8s resource in a separate file.
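Note that `fileset` returns paths relative to its first argument, which is why the replacement prefixes `each.value` with `${path.module}/` before handing it to `templatefile`.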
Have the same issue with `kubectl_manifest`, and I noticed that the error pops up when you have more than two `kubectl_manifest` instances in your code. I have three; the first two work perfectly fine, but when I add a third one, only that particular one fails while the first two work as normal. Same code, like for like, just the vars are different.
This is literally the recommended method for using `kubectl_manifest`. Is there a timeframe for fixing this bug?
Here's a workaround I came up with:
```hcl
locals {
  crds_split_doc  = split("---", file("${path.module}/crds.yaml"))
  crds_valid_yaml = [for doc in local.crds_split_doc : doc if try(yamldecode(doc).metadata.name, "") != ""]
  crds_dict       = { for doc in toset(local.crds_valid_yaml) : yamldecode(doc).metadata.name => doc }
}

resource "kubectl_manifest" "crds" {
  for_each  = local.crds_dict
  yaml_body = each.value
}
```
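The `if try(...)` filter drops fragments that aren't full manifests (comments or empty documents between `---` separators). Because `metadata.name` becomes the `for_each` key, it must be unique across documents; a later comment in this thread extends the pattern to handle non-unique names.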
Super interested to see this fixed as well. Terraform completely fails to work, seemingly at random.
Here are some interesting notes on this: https://github.com/clowdhaus/terraform-for-each-unknown
Thanks to eytanhanig, whose solution above worked for me. But I would like to extend it by dropping the local variables and adding a unique ID, which in my case solves the problem of non-unique names in `yamldecode(doc).metadata.name`:
resource "kubectl_manifest" "k8s_kube-dashboard" {
for_each = {
for i in toset([
for index, i in (split("---", templatefile("${path.module}/scripts/kube-dashboard.yml.tpl", {
kube-dashboard_nodePort = "${var.kube-dashboard_nodePort}"
})
)) :
{
"id" = index
"doc" = i
}
#if try(yamldecode(i).metadata.name, "") != ""
])
: i.id => i
}
yaml_body = each.value.doc
}
FWIW the "best" way I have found to replace this plugin is to define a local helm chart and use the helm_release
instead.
Basically boils down to defining a folder like:
```text
chart/
  Chart.yaml
  templates/
    custom.yaml
```
```yaml
# Chart.yaml
apiVersion: v2
name: local-manifests
version: 0.0.0
type: application
```
and a resource like:
resource "helm_release" "local" {
name = "local-manifests"
chart = "${path.module}/chart"
namespace = var.namespace
values = [
yamlencode({
# pass in whatever vars you want to your templates
})
]
}
I'm not gonna say it's ideal to do it this way, but it handles any type of k8s YAML you want to throw at it very well, including multi-doc YAML files or directories full of YAML.
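For completeness, a hypothetical `templates/custom.yaml` would consume whatever you put in the `yamlencode({...})` map as `.Values`, like any other Helm template (the `dbHost` value name here is made up for illustration):

```yaml
# templates/custom.yaml -- illustrative template; any k8s manifest works here,
# assuming a dbHost value was passed through the helm_release values above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-manifests-config
data:
  db_host: {{ .Values.dbHost | quote }}
```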
My solution:
```hcl
locals {
  prometheus_objects    = split("\n---\n", file("${path.module}/prometheus.yaml"))
  prometheus_valid_yaml = [for doc in local.prometheus_objects : doc]
  prometheus_dict = {
    for doc in toset(local.prometheus_valid_yaml) :
    format("%s/%s/%s", yamldecode(doc).apiVersion, yamldecode(doc).kind, yamldecode(doc).metadata.name) => doc
  }
}

resource "kubectl_manifest" "prometheus" {
  for_each          = local.prometheus_dict
  yaml_body         = each.value
  server_side_apply = true
}
```
My workaround:
resource "kubectl_manifest" "policies" {
for_each = fileset(var.policy_directory, "*.yaml")
yaml_body = file("${var.policy_directory}/${each.value}")
}
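This works because `fileset` returns file names that are known at plan time, so the `for_each` keys never depend on apply-time values. The trade-off is that `file` (unlike `templatefile`) does no variable substitution, so it only suits manifests that need no computed inputs.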