terraform-provider-kubernetes
Provider produced inconsistent final plan
Terraform Version, Provider Version and Kubernetes Version
Terraform version: Terraform v1.12.0
Kubernetes provider version: v2.37.1
Kubernetes version: v1.31.7-eks-4096722
Affected Resource(s)
- kubernetes_network_policy
Terraform Configuration Files
I have several network policies defined. This is one of them; the rest are similar and fail with the same kind of error.
resource "kubernetes_network_policy" "state_metrics" {
metadata {
labels = {
"app.kubernetes.io/instance" = helm_release.prometheus-stack.metadata[0].name
"app.kubernetes.io/component" = "metrics"
"app.kubernetes.io/name" = "kube-state-metrics"
}
name = "state-metrics"
namespace = kubernetes_namespace.prometheus.id
}
spec {
policy_types = [
"Egress",
"Ingress",
]
egress {
to {
pod_selector {}
}
}
egress {
ports {
port = "443"
}
to {
ip_block {
cidr = var.kube_svc_ip_range
}
}
}
ingress {
from {
pod_selector {
match_labels = {
"app.kubernetes.io/name" = "prometheus"
}
}
}
ports {
port = "8080"
}
ports {
port = "9090"
}
}
ingress {
from {
ip_block {
cidr = var.kube_svc_ip_range
}
}
}
pod_selector {
match_labels = {
"app.kubernetes.io/component" = "metrics"
"app.kubernetes.io/name" = "kube-state-metrics"
}
}
}
}
Full list is here: https://gist.github.com/rl-dzaric/4f1a7ec125a76dc2d09390823969e07c
Terraform Output
╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.custom-hosted-k8s-monitoring.kubernetes_network_policy.state_metrics to include new values learned so far during apply, provider "registry.terraform.io/hashicorp/kubernetes" produced an invalid new
│ value for .spec[0].egress[0].to[0].pod_selector[0].match_labels: was cty.MapValEmpty(cty.String), but now null.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
More output: https://gist.github.com/rl-dzaric/87f20a11dd8e4ef818a8896457f1f566
Debug Output
Debug output: https://gist.github.com/rl-dzaric/b82734b0bc265d7e741d90502facbb6a
Panic Output
Steps to Reproduce
- Run `terraform apply` for the first time.
- Change something in the `helm_release.prometheus-stack` release, like a value change (see the sketch below).
- Run `terraform apply` again.
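
For reference, step 2 refers to a change in the dependent release. A minimal sketch of what that resource and change might look like (the chart, repository, and the toggled value are assumptions; only the resource name and the namespace reference come from the configuration above):

```hcl
# Hypothetical sketch of the dependent release; chart, repository, and the
# changed value are assumptions for illustration only.
resource "helm_release" "prometheus-stack" {
  name       = "prometheus-stack"
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "kube-prometheus-stack"
  namespace  = kubernetes_namespace.prometheus.id

  # Step 2: changing any value (e.g. this one) makes attributes of the release
  # known only after apply, which is when the network policy plan becomes inconsistent.
  set {
    name  = "kubeStateMetrics.enabled"
    value = "true"
  }
}
```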
Expected Behavior
The configuration should apply without any issues. The dependent helm chart might have some changes, but its name doesn't change, and there is no change to the egress policies, so there should be no errors.
Actual Behavior
I get the error above when I run apply after a change in the dependent helm release. Running apply a second time produces no errors: the first apply fails with the error, the second one succeeds.
Important Factoids
The issue is also present in older versions of the provider.
Explicitly setting `match_labels = {}` did not help either.
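
For clarity, the attempted workaround replaced the empty selector with an explicit empty map, roughly like this (only the relevant egress block shown):

```hcl
  egress {
    to {
      pod_selector {
        # Explicit empty map, hoping the provider would keep it instead of
        # returning null during apply; the same error still occurred.
        match_labels = {}
      }
    }
  }
```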
References
- Maybe related to: GH-2549
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment