Kubernetes Job is not replaced when spec.template.metadata is updated
Terraform version: 1.8.0
Kubernetes provider version: 2.32.0
Kubernetes version: 1.28
Affected Resource(s)
kubernetes_job, kubernetes_job_v1
Steps to Reproduce
- Provision a kubernetes_job resource with spec.template.metadata.labels populated with some labels (see the sketch after this list)
- Modify the labels in spec.template.metadata.labels
- Generate a plan
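For illustration, a minimal configuration that reproduces this might look as follows (the resource name, labels, and container are all hypothetical; bump the rev label and re-plan to observe the bug):

resource "kubernetes_job_v1" "repro" {
  metadata {
    name = "repro-job" # hypothetical name
  }

  spec {
    template {
      metadata {
        # Changing these labels should force a replacement,
        # but only yields an in-place update in the plan.
        labels = {
          app = "repro"
          rev = "1" # bump to "2", then re-plan
        }
      }

      spec {
        container {
          name    = "main"
          image   = "busybox"
          command = ["sh", "-c", "echo hello"]
        }
        restart_policy = "Never"
      }
    }
  }

  wait_for_completion = false # don't block apply on job completion
}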
Expected Behavior
The plan produced should show a replacement of the kubernetes_job resource
Actual Behavior
The plan shows an in-place update of the kubernetes_job resource, and subsequently applying it fails silently (presumably because a Job's pod template is immutable after creation, so modifying spec.template.metadata.labels via kubectl likewise results in an error).
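For comparison, the difference shows up in the plan action for the resource. A rough sketch using Terraform's standard plan notation (abridged, referencing the hypothetical resource from the sketch above):

What the plan currently shows after editing the labels:

  # kubernetes_job_v1.repro will be updated in-place
  ~ resource "kubernetes_job_v1" "repro" { ... }

What it should show instead:

  # kubernetes_job_v1.repro must be replaced
-/+ resource "kubernetes_job_v1" "repro" { ... }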
Important Factoids
Notably, modifying spec.template.spec does correctly force a replacement, so this looks like an oversight in the resource's logic for deciding when to replace versus update in place.
This issue can be worked around with the replace_triggered_by argument of Terraform's lifecycle block. For example:
locals {
  labels = { foo = "bar" }
}

# Any change to the labels changes this resource's triggers,
# which in turn forces replacement of the job below.
resource "null_resource" "job_recreation_trigger" {
  triggers = {
    labels = jsonencode(local.labels)
  }
}

resource "kubernetes_job" "job" {
  # ...

  spec {
    template {
      metadata {
        labels = local.labels
      }
    }
  }

  lifecycle {
    replace_triggered_by = [null_resource.job_recreation_trigger]
  }
}
Disclaimer: the above workaround is not fully tested; YMMV. We ended up setting replace_triggered_by on the job's namespace resource instead, since they share the same labels (see the sketch below).
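For reference, a minimal sketch of that namespace-based variant (resource and label names are illustrative; any update to the namespace, including a label change, triggers replacement of the job):

resource "kubernetes_namespace_v1" "ns" {
  metadata {
    name   = "example"    # hypothetical name
    labels = local.labels # same labels as the job's pod template
  }
}

resource "kubernetes_job" "job" {
  # ... same job configuration as above ...

  lifecycle {
    # Referencing the whole resource triggers replacement whenever
    # the namespace itself is updated or replaced.
    replace_triggered_by = [kubernetes_namespace_v1.ns]
  }
}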
References
This behavior was previously observed in https://github.com/hashicorp/terraform-provider-kubernetes/issues/1091#issuecomment-947849679.
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment