terraform-provider-kubernetes
Kubernetes provider 2.20.0 tries to load the kubeconfig file during terraform plan and fails because it doesn't exist yet
Terraform Version, Provider Version and Kubernetes Version
Terraform version: 1.4.6
Kubernetes provider version: 2.20.0
Kubernetes version: v1.25.7+k3s1
Affected Resource(s)
Any Kubernetes resource.
Terraform Configuration Files
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.20.0"
    }
  }
}
provider "kubernetes" {
config_path = "<path-to-my-kubeconfig-file>"
}
resource "kubernetes_manifest" "root_application" {
manifest = yamldecode("${file("${path.module}/argocd/root-application.yaml")}")
}
Debug Output
'config_path' refers to an invalid path: "./k3s-config.yaml": stat ./k3s-config.yaml: no such file or directory
Steps to Reproduce
1. Point config_path at a kubeconfig file that is generated by another resource in the same configuration, so it does not exist yet.
2. Run terraform plan.
There really isn't much to go on here. The debug output section doesn't contain the actual debug output of the provider.
Please provide debug output using the TF_LOG=debug environment variable so we can diagnose this issue further.
On top of that, please share the exact value of the config_path attribute in the provider block. Did you set a verbatim string value or is it being sourced from some other attribute / module output?
I prepared a short sample: terraform-k8s-sample.zip
And here are the logs of the terraform plan command with the TF_LOG=debug environment variable: terraform-k8s-sample-logs.txt
Has anyone else encountered this problem? Any progress on it?
I guess it is the same problem again: https://github.com/hashicorp/terraform-provider-kubernetes/issues/1142
Yes, it seems to be. It's strange that nobody cares about it %)
@alex-samuilov From the TF configuration you shared in terraform-k8s-sample.zip, I can see that k8s-config.yaml is generated by a local_file resource that is part of the same configuration. This means it will only get created during the apply operation.
However, one critical difference with kubernetes_manifest resources is that they require the provider block to be fully configured and pointing at a working API server during the plan phase, because the provider retrieves the resource's schema from the cluster in order to plan the manifest. In the configuration you shared, the k8s-config.yaml file would not have been created yet during the planning phase, so the provider cannot contact the API as needed.
The recommendation here is to make sure the k8s-config.yaml is present before running any operations on configurations that include kubernetes_manifest resources.
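To illustrate that recommendation, here is a minimal sketch (directory layout and paths are hypothetical, not from the original report): keep the cluster bootstrap and the Kubernetes workloads in separate root modules and apply them in order, so the kubeconfig already exists when the workloads configuration is planned.

# workloads/main.tf -- planned only after the separate cluster configuration
# has written the kubeconfig, so the file exists when this plan runs.
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.20.0"
    }
  }
}

provider "kubernetes" {
  # Hypothetical path: this file is written by the cluster root module.
  config_path = "../cluster/k3s-config.yaml"
}

resource "kubernetes_manifest" "root_application" {
  manifest = yamldecode(file("${path.module}/argocd/root-application.yaml"))
}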
@alexsomesan I got it, thanks for the answer. So it turns out there is no way to write a single Terraform configuration in which Kubernetes itself is installed first and the workloads are deployed afterwards (which is what I'm trying to do).
What does terraform plan do that requires access to the Kubernetes API server?
It would be great if the user could ignore access checks for the Kubernetes API server.
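A workaround in line with the maintainer's recommendation above is a staged apply over a split configuration like the one sketched earlier (directory names hypothetical):

# Staged apply: create the cluster (which writes the kubeconfig) first,
# then plan/apply the workloads once the file exists.
terraform -chdir=cluster apply
terraform -chdir=workloads apply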