terraform-provider-kubernetes
ignore_changes not working for kubernetes_horizontal_pod_autoscaler
Terraform Version, Provider Version and Kubernetes Version
- Terraform version: v1.0.5
- Kubernetes provider version: v2.4.1
- Kubernetes version: 1.20.9
Affected Resource(s)
- kubernetes_horizontal_pod_autoscaler
Terraform Configuration Files
resource "kubernetes_horizontal_pod_autoscaler" "pod-autoscaler" {
metadata {
name = "pod-autoscaler-tasks-service"
namespace = "default"
}
spec {
max_replicas = 10
min_replicas = 1
target_cpu_utilization_percentage = 400
scale_target_ref {
kind = "Deployment"
name = "tasks-service"
api_version = "apps/v1"
}
}
lifecycle {
ignore_changes = [
metadata[0].resource_version
]
}
}
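As a possible workaround (untested on my side, so treat this as an assumption rather than confirmed behavior), ignoring the entire metadata attribute instead of the single resource_version field could be tried, at the cost of also ignoring deliberate edits to labels and annotations:

lifecycle {
  # Assumption: ignoring the whole metadata block, rather than just
  # metadata[0].resource_version, may suppress the diff. Side effect:
  # intentional changes to labels/annotations would be ignored too.
  ignore_changes = [
    metadata,
  ]
}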
Debug Output
N/A
Panic Output
N/A
Steps to Reproduce
1. terraform plan
Expected Behavior
The change to metadata[0].resource_version should be ignored by terraform plan, per the ignore_changes lifecycle setting.
Actual Behavior
The resource_version change still appears in the terraform plan output:
  # kubernetes_horizontal_pod_autoscaler.pod-autoscaler has been changed
  ~ resource "kubernetes_horizontal_pod_autoscaler" "pod-autoscaler" {
        id = "default/pod-autoscaler-tasks-service"

      ~ metadata {
            name             = "pod-autoscaler-tasks-service"
          ~ resource_version = "32787" -> "35571"
            # (5 unchanged attributes hidden)
        }

        # (1 unchanged block hidden)
    }
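Note that the "has been changed" header above is Terraform's report of drift detected during the refresh step; my understanding (an assumption on my part, not verified against the provider) is that ignore_changes only filters planned changes, not this refresh report. A quick way to test that:

# If the diff disappears when the refresh is skipped, the output above is
# refresh-detected drift rather than an actual planned change.
terraform plan -refresh=false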
Important Factoids
Nothing atypical; we use workspaces, variable files, and a remote state.
References
- https://github.com/hashicorp/terraform-provider-kubernetes/issues/473
- https://github.com/hashicorp/terraform-provider-azurerm/issues/8563
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.