Allow replace_triggered_by to trigger on containing instance (self) properties
Terraform Version
Terraform v1.2.8
on linux_amd64
Use Cases
We're ordering VMs from a private VMware vRealize Automation (vRA) version 8 instance. An existing VM is expired by vRA after some weeks, meaning it is powered off and scheduled for deletion. When that happens, the vRA API still returns the VM, and thus so does Terraform. IMHO the provider should make it easy to identify expired deployments. There is an expiration date property, but it is exported incorrectly: https://github.com/vmware/terraform-provider-vra/issues/436
There is also the last_request property, recording the last request that happened inside vRA (the expiration action, in my case), which Terraform could use to recognize that state.
So, as a workaround for my provider (and maybe also useful for other use cases), I want Terraform to recreate my resource whenever the last_request property contains an expiration event.
But adding
resource "vra_deployment" "vm" {
  # ...
  lifecycle {
    replace_triggered_by = [vra_deployment.vm.last_request]
  }
}
results in the following error on refresh/plan for an existing, expired deployment, instead of the desired replacement:
Error: no change found for vra_deployment.vm in the root module
Attempted Solutions
Currently I'm able to at least abort further processing of expired VMs with
lifecycle {
  postcondition {
    condition     = self.last_request[0].action_id != "Deployment.Expire"
    error_message = "Machine has been Expired by vRA"
  }
}
But this still requires manual interaction (removal of the VM to force a new creation).
Proposal
Make replace_triggered_by able to reference properties of self, i.e. the containing resource.
References
- #31685
Hi @azrdev,
Thanks for filing the issue. I understand the need to try and work around the problem with the provider in this case, but the replace_triggered_by feature would not be able to do this without some other significant changes in the handling of resources. The reason for the no change found ... error is that the point at which Terraform must decide to replace a resource is before the provider has planned the change, so that the replacement can be planned accordingly; therefore there is no change for replace_triggered_by to inspect, because the provider has not yet planned any changes. Once the provider has returned a plan with all the changes recorded, the current protocol for the resource lifecycle requires that Terraform adhere to the planned changes during apply.
If there is some external data you can retrieve to detect the desired point of replacement, you could store that data in a null_resource and use replace_triggered_by referencing that null_resource. If there is no external input which can represent the change, there unfortunately is no good workaround other than manual replacement of the resources (though replacing many instances can be made more convenient, since only the null_resource needs to change to trigger all instances).
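That pattern might look like the following minimal sketch, where the trigger value is a hypothetical external input (a variable here, but it could come from any data source outside the resource being replaced):

```hcl
# Hypothetical external signal; bump this value to trigger replacement.
variable "expiration_generation" {
  type    = string
  default = "0"
}

resource "null_resource" "replacement_marker" {
  triggers = {
    # Changing this value replaces the null_resource, which in turn
    # triggers replacement of every resource referencing it below.
    generation = var.expiration_generation
  }
}

resource "vra_deployment" "vm" {
  # ...
  lifecycle {
    replace_triggered_by = [null_resource.replacement_marker]
  }
}
```

Because the null_resource's change is planned before the vra_deployment, Terraform can see it in time to schedule the replacement, which is exactly what referencing self cannot provide.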
if there is some external data you can retrieve to detect the desired point of replacement, you could store that data in a null_resource and use replace_triggered_by referencing that null_resource
I'm trying to implement just that (since the provider fixed the issue and now has a timestamp field for expiration date):
resource "vra_deployment" "vm" {
  name = var.name
  # ...
  lifecycle {
    replace_triggered_by = [null_resource.expiration_marker]
  }
}

data "vra_deployment" "this" {
  name = var.name
}

resource "null_resource" "expiration_marker" {
  triggers = {
    is_expired = timecmp(try(data.vra_deployment.this.lease_expire_at, timestamp()), timestamp()) < 0
  }
}
but this fails when the VM is not yet present: the data source returns "not found". Guarding it with a count would introduce a dependency cycle (and prevent the use of count).
I'm raising an issue with the provider for proper filtering of expired VMs, in the meantime: any more suggestions for a workaround?
Hi @azrdev,
This type of configuration would be expected to fail for the reason given: the vra_deployment cannot be read before it is created, and attempting to force the data source to be ordered after the managed resource will result in a cycle. By "external data" here, I meant something outside of the vra_deployment resource which determines the expiration value. If the resource type has a concept of "expiration", it would not be out of the ordinary for the provider to implement the replacement feature within the resource itself.
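If the lease length is known in advance, one way to model such external data (a sketch, assuming the hashicorp/time provider and a fixed lease duration, neither of which the thread confirms) is the time_rotating resource, which plans its own replacement once the rotation period has elapsed:

```hcl
terraform {
  required_providers {
    time = {
      source = "hashicorp/time"
    }
  }
}

# Planned for replacement after the assumed lease length.
resource "time_rotating" "lease" {
  rotation_days = 30 # hypothetical lease duration
}

resource "vra_deployment" "vm" {
  # ...
  lifecycle {
    # Replacement of time_rotating.lease forces replacement of this deployment.
    replace_triggered_by = [time_rotating.lease]
  }
}
```

This sidesteps reading the expiration back from vRA entirely, at the cost of keeping the configured rotation in sync with the actual vRA lease policy.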
Thanks @jbardin for your assessment! Then I'll hope the provider will move forward with a fix.
@azrdev, based on your comment I am going to close this issue in favor of the provider issue (https://github.com/vmware/terraform-provider-vra/issues/473) - please let me know if I am mistaken and I can re-open this issue. It sounds like everything is working as designed on the Terraform core side, however.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.