dependency instances are not filtered during a targeted plan or destroy
Terraform Version
Both Terraform 0.12.29 and Terraform 0.13.0:
Terraform v0.12.29
+ provider.local v1.4.0
+ provider.null v2.1.2
Terraform v0.13.0
+ provider registry.terraform.io/-/local v1.4.0
+ provider registry.terraform.io/-/null v2.1.2
+ provider registry.terraform.io/hashicorp/local v1.4.0
+ provider registry.terraform.io/hashicorp/null v2.1.2
Terraform Configuration Files
terraform.tf:
terraform {
  required_version = ">= 0.12.29"

  required_providers {
    local = {
      source  = "hashicorp/local"
      version = "~> 1.4.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "~> 2.1.2"
    }
  }
}
main.tf:
variable "file_contents" {
  description = "map of filenames to contents"
  type        = map(string)
}

resource "null_resource" "provisioner" {
  for_each = var.file_contents

  triggers = {
    file_id = local_file.file[each.key].id
    name    = each.key
  }

  provisioner "local-exec" {
    command = "echo Running create provisioner for ${self.triggers.name}; sleep 5"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "echo Running destroy provisioner for ${self.triggers.name}; sleep 5"
  }
}

resource "local_file" "file" {
  for_each = var.file_contents

  filename = "${path.root}/${each.key}"
  content  = each.value
}
variables.auto.tfvars:
file_contents = {
  test1 = "test1"
  test2 = "test2"
}
Expected Behavior
- When destroying local_file.file["test1"], only local_file.file["test1"] and null_resource.provisioner["test1"] should be destroyed.
Actual Behavior
- When destroying local_file.file["test1"], null_resource.provisioner["test2"] is unexpectedly destroyed in addition to local_file.file["test1"] and null_resource.provisioner["test1"].
Steps to Reproduce
terraform init
terraform apply
terraform destroy -target=local_file.file["test1"]
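Depending on the shell, the target address may need quoting so the brackets and inner double quotes reach Terraform intact (the taint command in the Additional Context section below is quoted for the same reason); a likely-safe form is:

terraform destroy -target='local_file.file["test1"]'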
Additional Context
- The use of null_resource is advised as a workaround for the destroy provisioner changes introduced in #24083 by @jbardin. As demonstrated, in a common use-case with for_each used to generate clusters of resources, were the destroy provisioner to run for all resources when a single resource is destroyed, the impact could be catastrophic.
- In other use-cases where, for example, time_static replaces null_resource, the destruction of one item results in the destruction of all items' associated resources (e.g. creation timestamps); see the sketch after this list.
- terraform taint 'local_file.file["test1"]' works as expected: only the test1-related objects are touched.
- Changes to the file_contents map have the expected behaviour. Namely, removing an existing key or adding a new key results in the expected minimal number of changes, affecting only objects directly related to that key.
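The time_static variant mentioned above would look roughly like the following sketch (assuming the hashicorp/time provider and the same file_contents variable; the resource name "created" is illustrative):

# Sketch only: record a per-file creation timestamp instead of using null_resource.
resource "time_static" "created" {
  for_each = var.file_contents

  # Re-create the timestamp whenever the associated file is replaced.
  triggers = {
    file_id = local_file.file[each.key].id
  }
}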
References
- #24083
Thanks for the report and the simple reproduction case! I reproduced with your configuration against 0.13.0. From the logs:
2020/08/13 10:16:03 [TRACE] DestroyEdgeTransformer: local_file.file["test1"] has stored dependency of null_resource.provisioner["test1"]
2020/08/13 10:16:03 [TRACE] DestroyEdgeTransformer: local_file.file["test2"] has stored dependency of null_resource.provisioner["test1"]
2020/08/13 10:16:03 [TRACE] DestroyEdgeTransformer: local_file.file["test1"] has stored dependency of null_resource.provisioner["test2"]
2020/08/13 10:16:03 [TRACE] DestroyEdgeTransformer: local_file.file["test2"] has stored dependency of null_resource.provisioner["test2"]
2020/08/13 10:16:03 [TRACE] Completed graph transform *terraform.DestroyEdgeTransformer with new graph:
local_file.file["test1"] - *terraform.NodePlanDestroyableResourceInstance
null_resource.provisioner["test1"] - *terraform.NodePlanDestroyableResourceInstance
null_resource.provisioner["test2"] - *terraform.NodePlanDestroyableResourceInstance
provider["registry.terraform.io/hashicorp/local"] - *terraform.NodeApplyableProvider
local_file.file["test2"] - *terraform.NodePlanDestroyableResourceInstance
null_resource.provisioner["test1"] - *terraform.NodePlanDestroyableResourceInstance
null_resource.provisioner["test2"] - *terraform.NodePlanDestroyableResourceInstance
provider["registry.terraform.io/hashicorp/local"] - *terraform.NodeApplyableProvider
null_resource.provisioner["test1"] - *terraform.NodePlanDestroyableResourceInstance
provider["registry.terraform.io/hashicorp/null"] - *terraform.NodeApplyableProvider
null_resource.provisioner["test2"] - *terraform.NodePlanDestroyableResourceInstance
provider["registry.terraform.io/hashicorp/null"] - *terraform.NodeApplyableProvider
provider["registry.terraform.io/hashicorp/local"] - *terraform.NodeApplyableProvider
provider["registry.terraform.io/hashicorp/null"] - *terraform.NodeApplyableProvider
The destroy edge being added from local_file.file["test2"] to null_resource.provisioner["test1"] is counterintuitive to me, but this is the first time I've looked at the destroy edge transformer. I'm not sure what the correct approach to fixing this is.
Hi @isometry,
Thanks for reporting this. While we have allowed the targeting of individual instances within terraform, dependencies are still handled at the whole-resource level, so when determining what the dependencies of the target are, the entire null_resource.provisioner resource will be picked up. Since there are no changes to the config that indicate which null_resource.provisioner instance has changes, there's no way to single out null_resource.provisioner["test1"].
Using for_each here, however, does give you more precise control over the individual instances, and in this case we recommend altering the inputs to for_each to destroy only the instances you want. This allows terraform to correctly evaluate all the side-effects of the change, rather than -target, which is only meant as a workaround for certain situations and cannot be as precise when determining changes in dependencies.
Hi @jbardin ,
Thank you for the explanation. I appreciate that the behaviour is only triggered in an edge case, but it seems such a dangerous edge case that I thought it warranted some attention.
Is the current behaviour (coarse inter-resource dependencies) something that you're looking to improve, or is this an edge case that must simply be documented and avoided?
There are a couple of different aspects here: the internal dependency tracking within terraform, and the expectations about what -target is supposed to do.
For the first, there are no immediate plans to change how dependencies are tracked, since it's so fundamental to the internal working of terraform.
As for the second, it's become apparent that -target is not really flexible enough for what users want. Often they expect it to only change the exact resource targeted and not affect dependent resources, or to automatically add in only the targeted resource's dependencies and not its dependents. There's also a feature request for "inverse targeting", but then we have the inverse problem of what to include, since the excluded resource likely has some dependencies of its own, so handling them in the same way as -target wouldn't make sense.
Solving these problems probably requires a breaking change to -target behavior, or another feature entirely that allows for more precise declaration of the user's intent. I think solving the use case presented here falls into the latter category that can't be solved with the current incarnation of -target; so it is something we want to improve, but we have yet to architect that solution.
I forgot to mention as well that what makes this the edge case it appears to be is the use of -target with destroy. If there are only updates happening, then the extra targeted instances would have no changes and hence not show up in the plan. Using destroy, however, is an unusual case where terraform doesn't know what to do with the extra planned instances, because destroy doesn't cause any configuration changes to compare against. Since there's generally no reason to use a targeted destroy other than as a stepping stone to a full destroy when some problem is preventing the initial full destroy, this use might be worthy of slightly different -target warning text briefly describing the situation.
Good clarifications, thank you. I think we should be able to avoid targeted destroys, but as I've had reason to use them in the past – and obviously didn't see the same behaviour when the destroy provisioner was directly specified within the targeted resource – the behaviour described here came as a nasty surprise as I was working out how to port our modules to Terraform 0.13 :-)
Hello @jbardin, I ran into this issue too. Could you please clarify this sentence: "in this case we recommend altering the inputs to for_each to destroy only the instances you want", and provide an example like isometry did?
@ginolegigot, what that means in terms of the first example is that if you want to delete the test1 instance, you would remove that entry from the file_contents data used in the for_each expression:
file_contents = {
  test2 = "test2"
}
A normal plan will then cause that instance to be destroyed without the use of the destroy command.
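For example (assuming the test1 entry has been removed from variables.auto.tfvars as shown above), a plan followed by an apply removes only the test1-related objects:

terraform plan   # should report local_file.file["test1"] and null_resource.provisioner["test1"] to be destroyed
terraform apply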
Ok that was pretty simple ^^ thanks a lot!