terraform-provider-kubernetes
Failed to get namespace
Terraform version: 1.7.4
Kubernetes provider version: 2.31.0
Kubernetes version: 1.28
I have a module called 'kube' where I create the following namespace:
```hcl
resource "kubernetes_namespace" "pulsar-ns" {
  metadata {
    name = "pulsar2"
  }
}
```
Then, in another module that depends on the kube module (so it is applied after 'kube'), I do:
```hcl
resource "terraform_data" "prepare-pulsar-helm-release" {
  provisioner "local-exec" {
    command     = "./prepare_helm_release.sh -k pulsar -s -n ${var.pulsar-ns}"
    interpreter = ["bash", "-c"]
    working_dir = path.module
  }

  depends_on = [var.pulsar-ns]
}
```
When running terraform apply, the namespace is created correctly, but the local-exec fails with the following error (even though the namespace exists):
```text
exit status 1. Output: error: failed to get namespace 'pulsar2'
please check that this namespace exists, or use the '-c' option to create it
```
Thanks for opening an issue @MonicaMagoniCom – I suspect the problem is with your depends_on configuration. Can you try making your terraform_data resource depend on the kubernetes_namespace resource directly instead of the variable? That would tell us whether there's a bug in the provider.
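For reference, the suggestion above would look roughly like this (a minimal sketch, assuming the two resources live in the same module; if they are in separate modules, the namespace would need to be passed through a module output or a module-level depends_on instead):

```hcl
resource "terraform_data" "prepare-pulsar-helm-release" {
  provisioner "local-exec" {
    # Take the namespace name from the resource attribute rather than a
    # variable, which also creates an implicit dependency.
    command     = "./prepare_helm_release.sh -k pulsar -s -n ${kubernetes_namespace.pulsar-ns.metadata[0].name}"
    interpreter = ["bash", "-c"]
    working_dir = path.module
  }

  # Depend on the resource itself, not a variable, so Terraform orders
  # this step after the namespace has actually been created.
  depends_on = [kubernetes_namespace.pulsar-ns]
}
```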
Actually, it is not deterministic: sometimes it fails, sometimes it works. Anyway, I'm going to remove the var and let you know, thank you.
I'm still experiencing the same issue. The namespace is created in the same apply, and the "prepare-pulsar-helm-release" resource sometimes does not find it even though it was created a few steps earlier. Do I perhaps need to split this into two terraform apply runs?
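One way to rule out API-server eventual consistency (independently of the provider) is to block until the namespace reports phase Active before running the script. A sketch, assuming kubectl is available and configured on the machine running Terraform and that the name is still passed in via var.pulsar-ns:

```hcl
resource "terraform_data" "prepare-pulsar-helm-release" {
  provisioner "local-exec" {
    # Wait until the namespace's status.phase is Active (or time out
    # after 60s), then run the existing helper script.
    command     = <<-EOT
      kubectl wait --for=jsonpath='{.status.phase}'=Active \
        "namespace/${var.pulsar-ns}" --timeout=60s
      ./prepare_helm_release.sh -k pulsar -s -n "${var.pulsar-ns}"
    EOT
    interpreter = ["bash", "-c"]
    working_dir = path.module
  }
}
```

If the wait itself times out, that would point at the cluster rather than at Terraform's ordering; if it always succeeds immediately, the problem is more likely in how the dependency is expressed.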
@jrhouston I can do some investigation on this issue for Q4, and will provide updates here depending on my findings! Are you okay with that?
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!