terraform-provider-kubernetes
kubernetes_manifest does not respect provider dependencies
Terraform Version, Provider Version and Kubernetes Version
Terraform version: 1.2.1
Kubernetes provider version: 2.11.0
Kubernetes version: 1.22
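For reference, a versions.tf pinning these exact versions might look like the following (a sketch; the original report does not include one):
terraform {
  required_version = "1.2.1"

  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.11.0"
    }
  }
}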
Terraform Configuration Files
As an example, the provider is configured with values that only become available after the cluster is created. The kubernetes_manifest resource therefore should not attempt to contact the cluster until the cluster exists.
provider "kubernetes" {
alias = "euc1"
host = module.euc1[0].cluster_auth.host
cluster_ca_certificate = module.euc1[0].cluster_auth.cluster_ca_certificate
exec {
api_version = "client.authentication.k8s.io/v1alpha1"
args = ["eks", "get-token", "--cluster-name", module.euc1[0].cluster_auth.cluster_name]
command = "aws"
}
}
Affected Resource(s)
- kubernetes_manifest
Steps to Reproduce
- Create an EKS cluster (or another cloud-based Kubernetes cluster) and apply an arbitrary kubernetes_manifest to that cluster in the same Terraform project.
- Watch the provider fail because the Kubernetes cluster has not been created yet.
- If you first apply with -target to create the cluster, and then run a full apply, everything works normally (see the commands below).
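A minimal sketch of the targeted-apply workaround from the last step, assuming the cluster is created by the module.euc1 module shown above:
# First apply only the cluster module, then run a full apply.
terraform apply -target='module.euc1'
terraform apply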
Expected Behavior
The provider should recognize that it needs to wait until the cluster is created before trying to check whether the kubernetes_manifest resource can be applied.
Actual Behavior
During the plan (zero resources exist before the plan):
│ Error: Failed to construct REST client
│
│ with module.euc1[0].module.metrics.kubernetes_manifest.securitygrouppolicy_prometheus_kube_state_metrics,
│ on modules/metrics/security_group_policies.tf line 49, in resource "kubernetes_manifest" "securitygrouppolicy_prometheus_kube_state_metrics":
│ 49: resource "kubernetes_manifest" "securitygrouppolicy_prometheus_kube_state_metrics" {
│
│ cannot create REST client: no client config
Important Factoids
This only occurs when a cloud-based Kubernetes cluster is created in the same run in which a kubernetes_manifest is applied to that cluster.
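For illustration, a minimal reproduction along those lines might look like this (a sketch; the module source and the ConfigMap are hypothetical, while the provider wiring mirrors the configuration above):
# Hypothetical module that creates the EKS cluster.
module "euc1" {
  source = "./modules/eks"
  count  = 1
}

provider "kubernetes" {
  alias                  = "euc1"
  host                   = module.euc1[0].cluster_auth.host
  cluster_ca_certificate = module.euc1[0].cluster_auth.cluster_ca_certificate
}

# Planning this resource requires cluster API access,
# which fails while the cluster does not exist yet.
resource "kubernetes_manifest" "example" {
  provider = kubernetes.euc1

  manifest = {
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "example"
      namespace = "default"
    }
    data = {
      key = "value"
    }
  }
}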
References
- #1391
- #1453
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
This was working with version 2.7.1, but fails in 2.10.0 and 2.11.0.
Using the config below, I also see the error with the kubernetes_manifest resource:
Terraform: 1.1.7; Kubernetes provider: 2.7.1 and 2.11.0 (tested with both).
I don't know why this hasn't been taken care of until now. It's very basic dependency management, which Terraform is good at, and it's a blocker for our infrastructure automation. Please fix this as soon as possible. Thanks.
Hi folks,
Could you please replace client.authentication.k8s.io/v1alpha1 with client.authentication.k8s.io/v1beta1 in the exec block?
provider "kubernetes" {
...
exec {
api_version = "client.authentication.k8s.io/v1beta1"
...
}
}
Please let me know if that works. Thank you.
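For reference, applying that one change to the euc1 provider block from the original report would look like this (everything else unchanged):
provider "kubernetes" {
  alias                  = "euc1"
  host                   = module.euc1[0].cluster_auth.host
  cluster_ca_certificate = module.euc1[0].cluster_auth.cluster_ca_certificate

  exec {
    # v1beta1 instead of the deprecated v1alpha1
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", module.euc1[0].cluster_auth.cluster_name]
    command     = "aws"
  }
}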
We don't use the exec block. We use a token directly, as shown below (in Terraform Cloud):
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
token = data.aws_eks_cluster_auth.cluster.token
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
}
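For completeness, the data sources this configuration relies on would be declared along these lines (a sketch; var.cluster_name is a hypothetical input):
# Look up the existing EKS cluster and a short-lived auth token for it.
data "aws_eks_cluster" "cluster" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "cluster" {
  name = var.cluster_name
}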
@arybolovlev That doesn't fix it.
Creating the cluster and kubernetes_manifest resources in the same apply operation is not supported, because the provider needs to access the cluster API during the planning phase (hence the cluster needs to already be available).
@alexsomesan, maybe you can elaborate on that "need"? It would be very useful to enable kubernetes_resource to support late initialization of the provider, the way it works with, for example, the kubernetes_service resource.
@alexsomesan I'm not sure why this ticket is closed. The provider should respect the dependencies; it fails even when the cluster exists.