terraform-provider-kubernetes
Optionally prevent in-place resource updates
Description
The `kubernetes_manifest` resource assumes that it can be updated in-place.
I have a resource managed using a `kubernetes_manifest` that can't be updated in-place: an admission controller in Kubernetes blocks in-place updates, so the resource needs to be deleted and recreated with the new settings. However, there is no way to tell Terraform/the Kubernetes provider not to update in place, so all applies fail and require manual intervention.
I'd like a way to mark the `kubernetes_manifest` as "recreate only" or something like that.
Potential Terraform Configuration
I'd like to be able to write the following:
```hcl
resource "kubernetes_manifest" "this" {
  force_recreate = true

  manifest = {
    apiVersion = "elbv2.k8s.aws/v1beta1"
    kind       = "TargetGroupBinding"
    metadata = {
      name      = var.component
      namespace = var.k8s_namespace_name
    }
    spec = {
      serviceRef = {
        name = var.k8s_service_name
        port = var.port
      }
      targetGroupARN = aws_lb_target_group.this.arn
    }
  }
}
```
The `force_recreate` bit would indicate you can't update it in-place when something changes.
References
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
A change to the `metadata.name` attribute of any `kubernetes_manifest` resource would trigger a re-create rather than an update. You can force this to happen on every apply by introducing a random component in the name formula, like this:
```hcl
resource "kubernetes_manifest" "this" {
  manifest = {
    apiVersion = "elbv2.k8s.aws/v1beta1"
    kind       = "TargetGroupBinding"
    metadata = {
      name      = "${var.component}-${substr(uuid(), 0, 4)}"
      namespace = var.k8s_namespace_name
    }
    spec = {
      serviceRef = {
        name = var.k8s_service_name
        port = var.port
      }
      targetGroupARN = aws_lb_target_group.this.arn
    }
  }
}
```
Alternatively, you can force a replacement of a resource at plan time by passing the `-replace=<resource-address>` command line argument to the `terraform plan` command.

From `terraform plan -help`:

```
-replace=resource   Force replacement of a particular resource instance using
                    its resource address. If the plan would've normally
                    produced an update or no-op action for this instance,
                    Terraform will plan to replace it instead.
```
Thanks Alex! I'm sure these options work for some people, but neither of them works for me. I don't want to change the name of the resource, and I can't add the `-replace` flag at plan time because we use Terraform Cloud. And in general, I want to be able to write Terraform modules that can update their resources correctly without user intervention (so without users having to specify `-replace`).
Fair enough. Can you maybe share which particular fields being changed would require the replace behaviour?
We could do something similar to the `computed_fields` mechanism, but for forcing a replace when certain fields change.
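To illustrate what that might look like, here is a rough sketch. The argument name `replace_fields` and its syntax are hypothetical, not an existing provider feature:

```hcl
resource "kubernetes_manifest" "this" {
  # Hypothetical argument: a change to any listed field would force
  # a delete-and-recreate instead of an in-place update, mirroring
  # how computed_fields takes a list of field paths.
  replace_fields = [
    "spec.targetGroupARN",
  ]

  manifest = {
    # ... manifest as in the original example ...
  }
}
```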
For our use case, it seems like updating any of the `spec` is not allowed.
Not an immediate solution to this issue, but have a look at this new feature introduced in Terraform 1.2 called `replace_triggered_by` (mentioned here: https://github.com/hashicorp/terraform/blob/v1.2/CHANGELOG.md).
Oh yes, that sounds perfect!
+1 on this.
Right now I am using a third-party provider, `kubectl_manifest`, to achieve this. That is no good because, for HIPAA compliance, we have to fork said repo and manage it ourselves, and the code is also a little ugly.
Unfortunately, TF 1.2's `replace_triggered_by` is not a solution to this. It only lets you trigger replacement by changes to other resources or their fields, not by changes to other fields of the same resource. It seems to have been designed without consideration for this use case. Attempting to specify the full path to the same resource's field results in a cycle error when planning.
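For example, a self-referential configuration along these lines (field names borrowed from the OP's example) is the kind that fails with a cycle error:

```hcl
resource "kubernetes_manifest" "this" {
  manifest = {
    # ... apiVersion, kind, metadata as before ...
    spec = {
      targetGroupARN = aws_lb_target_group.this.arn
    }
  }

  lifecycle {
    # Pointing replace_triggered_by back at a field of this same
    # resource is rejected by Terraform during planning.
    replace_triggered_by = [kubernetes_manifest.this.manifest.spec.targetGroupARN]
  }
}
```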
It would be great if `kubernetes_manifest` could gain an `immutable_fields` or similar argument. My use-case is exactly the same as the OP's: we're using aws-load-balancer-controller's `TargetGroupBinding`, and whenever `targetGroupARN` is changed we have to manually intervene in our CI/CD process to run a plan with `-replace`, because aws-load-balancer-controller's validation webhook denies any change to that field. Any custom resource with webhooks could have immutable fields that `kubernetes_manifest` doesn't know about, so having an argument to inform it about them makes sense (a similar rationale to `computed_fields`).
As much as possible, the configuration should codify constraints like this so that operators don't have to step in and manually kludge stuff.
Using `replace_triggered_by` is possible via the null provider. For example:
```hcl
resource "null_resource" "trigger" {
  triggers = {
    key = "value"
  }
}

resource "kubernetes_manifest" "this" {
  manifest = { ... }

  lifecycle {
    replace_triggered_by = [null_resource.trigger]
  }
}
```
Sorta. You would have to manually modify the `null_resource` trigger value each time you change the value in the `kubernetes_manifest`. You can't have the `null_resource`'s trigger point back at the immutable field in the `kubernetes_manifest`, because that would create a cycle. So it's brittle: the person modifying the `kubernetes_manifest` could forget to change the `null_resource` value.
If the immutable field is dependent on an attribute of some third resource this would work well though: just point both the value in `kubernetes_manifest` and the trigger in `null_resource` at the attribute of the third resource.
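A sketch of that pattern, reusing the TargetGroupBinding example from above, where the target group plays the role of the third resource:

```hcl
resource "null_resource" "trigger" {
  triggers = {
    # Changes whenever the target group's ARN changes, e.g. when
    # the target group itself is replaced.
    target_group_arn = aws_lb_target_group.this.arn
  }
}

resource "kubernetes_manifest" "this" {
  manifest = {
    # ... apiVersion, kind, metadata as before ...
    spec = {
      targetGroupARN = aws_lb_target_group.this.arn
    }
  }

  lifecycle {
    # Both the manifest field and the trigger derive from the same
    # third-resource attribute, so no cycle is created.
    replace_triggered_by = [null_resource.trigger]
  }
}
```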
Ah, if we have to keep updating a value, then I'd say `replace_triggered_by` isn't a solution, because it would make writing a reusable Terraform module difficult.
If you are implementing a reusable module, changing values would presumably come from input variables. If this is in your root module and you must use a regularly changing literal, you could define the value in the null resource and reference it in the kubernetes manifest from there, like this:
```hcl
resource "null_resource" "this" {
  triggers = {
    key = "value"
  }
}

resource "kubernetes_manifest" "this" {
  manifest = {
    ...
    someAttribute = null_resource.this.triggers.key
    ...
  }

  lifecycle {
    replace_triggered_by = [null_resource.this]
  }
}
```
If you are worried about duplicating literal values in general, a local variable might be the way to go.
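A minimal sketch of the locals variant (names here are illustrative), so the literal is written exactly once:

```hcl
locals {
  # Single source of truth for the value that forces replacement.
  some_value = "value"
}

resource "null_resource" "this" {
  triggers = {
    key = local.some_value
  }
}

resource "kubernetes_manifest" "this" {
  manifest = {
    # ... other manifest attributes ...
    someAttribute = local.some_value
  }

  lifecycle {
    replace_triggered_by = [null_resource.this]
  }
}
```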
This open issue https://github.com/kubernetes/kubernetes/issues/65973 could be a game changer for this one. Once implemented, the Kubernetes provider will be able to read field immutability from the CRD and act appropriately.
In the meantime, we all have to rely on workarounds implemented on the client side.
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!