terraform-plugin-sdk
Changes to `timeouts` only should also be detected and result in an apply
SDK version
v2.16.0
Use-cases
Deploy the following config:
resource "azurerm_resource_group" "test" {
  name     = "mgd-test123"
  location = "eastus2"
}
The default timeouts are set (the values are durations in nanoseconds; 5400000000000 ns is 90 minutes and 300000000000 ns is 5 minutes):
$ cat terraform.tfstate | jq '.resources | .[] | select(.type == "azurerm_resource_group") | .instances[0].private' | tr -d '"' | base64 -d | jq '.["e2bfb730-ecaa-11e6-8f88-34363bc7c4c0"]'
{
  "create": 5400000000000,
  "delete": 5400000000000,
  "read": 300000000000,
  "update": 5400000000000
}
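The config is then updated to set an explicit read timeout. A minimal sketch of the updated config, assuming only the read timeout is customized:

resource "azurerm_resource_group" "test" {
  name     = "mgd-test123"
  location = "eastus2"

  timeouts {
    read = "10m"
  }
}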
After adding the timeouts block with read = "10m" and running a refresh, the stored meta in state is not changed for the read timeout:
$ tf refresh
azurerm_resource_group.test: Refreshing state... [id=/subscriptions/67a9759d-d099-4aa8-8675-e6cfd669c3f4/resourceGroups/mgd-test123]
$ cat terraform.tfstate | jq '.resources | .[] | select(.type == "azurerm_resource_group") | .instances[0].private' | tr -d '"' | base64 -d | jq '.["e2bfb730-ecaa-11e6-8f88-34363bc7c4c0"]'
{
  "create": 5400000000000,
  "delete": 5400000000000,
  "read": 300000000000,
  "update": 5400000000000
}
Also, terraform plan shows no diff.
The only way to update the timeout is to change something in the resource to trigger an apply:
$ tf apply -auto-approve
...
Terraform will perform the following actions:
  # azurerm_resource_group.test will be updated in-place
  ~ resource "azurerm_resource_group" "test" {
        id       = "/subscriptions/0000/resourceGroups/mgd-test123"
        name     = "mgd-test123"
      ~ tags     = {
          + "foo" = "bar"
        }
        # (1 unchanged attribute hidden)

      + timeouts {
          + read = "10m"
        }
    }
Plan: 0 to add, 1 to change, 0 to destroy.
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
$ cat terraform.tfstate | jq '.resources | .[] | select(.type == "azurerm_resource_group") | .instances[0].private' | tr -d '"' | base64 -d | jq '.["e2bfb730-ecaa-11e6-8f88-34363bc7c4c0"]'
{
  "create": 5400000000000,
  "delete": 5400000000000,
  "read": 600000000000,
  "update": 5400000000000
}
This is fine in most cases, since simply updating the timeout for a resource is meaningless until some operation actually happens to the resource, i.e. during an apply.
However, the new timeout only takes effect after the plan stage (PlanResourceChange in terms of the plugin protocol) during apply, not before: the read (ReadResource in terms of the plugin protocol) during refresh still uses the old read timeout. This causes issues like https://github.com/hashicorp/terraform-provider-azurerm/issues/14213, where users manage a collection of resources of the same type and the default read timeout is not long enough to finish the read. In that case, increasing the read timeout doesn't help terraform apply/terraform plan, because the refresh part still uses the old timeout and therefore still times out.
The workaround for this is:
- Either make a dummy change to all these resources to trigger an apply (with -refresh=false to avoid refreshing), as sketched below
- Or recreate all these resources with the timeout set before apply
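A minimal sketch of the first workaround, assuming a throwaway tag is used as the dummy change (the tag key and value are arbitrary placeholders, not part of the original report):

resource "azurerm_resource_group" "test" {
  name     = "mgd-test123"
  location = "eastus2"

  timeouts {
    read = "10m"
  }

  tags = {
    # Hypothetical no-op marker; any in-place change works.
    "timeout-bump" = "1"
  }
}

Then run terraform apply -refresh=false so the old, too-short read timeout is never exercised; once the new read timeout has been persisted into the resource's state metadata, the dummy tag can be removed again in a follow-up change.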
Neither workaround is ideal; it would be great if we could detect a diff for timeouts and run an apply to update them.
Drive-by note: I'm guessing the framework's handling of this wouldn't have this issue, as it does not treat the timeouts configuration differently from any other attribute. The framework also allows customization of no-change plans, if necessary.
If you want to try it out, here's some information:
- https://developer.hashicorp.com/terraform/plugin/framework/resources/timeouts
- https://developer.hashicorp.com/terraform/plugin/framework/migrating/resources/timeouts
I can confirm I'm facing the same issue: my timeout-only changes were never picked up until I forced a dummy change to the resource, and that's not always easy to do.
I am also facing the same issue. Increasing the timeout does not fix it; I am using the CAF module for private DNS zones and azurerm_private_dns_zone_virtual_network_link.