Support for dynamic blocks and meta-arguments
Afternoon,
This is a feature request to allow the dynamic block capability to work with resource meta-arguments.
Current Terraform Version
0.12.20+
Use-cases
The use case I'm trying to implement is a simple one.
I would like to add a lifecycle meta-argument to a resource when our var.ENVIRONMENT == "prod", i.e. stop the pipeline from destroying prod resources.
Attempted Solutions
# Here we set a lifecycle block to include `prevent_destroy = true` when `var.ENVIRONMENT == "prod"`.
dynamic "lifecycle" {
  for_each = var.ENVIRONMENT == "prod" ? [{}] : []
  content {
    prevent_destroy = true
  }
}
Result of the above is:
Error: Unsupported block type
on main.tf line 25, in resource "azurerm_resource_group" "rgnamegoeshere":
25: dynamic "lifecycle" {
Blocks of type "lifecycle" are not expected here.
Proposal
Support meta-arguments for use with dynamic blocks. I'm sure it's really easy to do. (Just kidding.)
References
Similar request in Terraform Core discussion: https://discuss.hashicorp.com/t/dynamic-lifecycle-ignore-changes/4579/4
Another use case would be if we sometimes want to ignore a field, like master_password or similar.
Another way of solving both of these use cases (I guess?) could be to allow variables in lifecycle blocks, something like:
locals {
  destroy = var.ENVIRONMENT == "prod" ? true : false
}

lifecycle {
  ignore_changes  = var.list_with_changes_to_ignore
  prevent_destroy = local.destroy
}
It would be very useful in any case.
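Until something like this lands, the workaround most people reach for is to duplicate the resource and gate each copy with count. A minimal sketch of that pattern (the resource type, names, and attributes below are illustrative assumptions, not from this issue):

```hcl
# Sketch of the duplicate-resource workaround for a conditional
# prevent_destroy; names and attributes here are illustrative.
resource "azurerm_resource_group" "rg_protected" {
  count    = var.ENVIRONMENT == "prod" ? 1 : 0
  name     = "rg-example"
  location = "westeurope"

  lifecycle {
    prevent_destroy = true
  }
}

resource "azurerm_resource_group" "rg_unprotected" {
  count    = var.ENVIRONMENT == "prod" ? 0 : 1
  name     = "rg-example"
  location = "westeurope"
}
```

The obvious downside is that every attribute is written twice, and flipping var.ENVIRONMENT moves the resource to a different address in state (forcing a destroy/create or a terraform state mv).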
+1 for any of these. It would be really useful if we could manipulate lifecycle rules via variables or dynamic blocks.
+1 for this enhancement. In my case I want to support two different major provider versions. In the old one there is a field which is required, but in the newest it doesn't exist.
In my case, I'm creating a GKE module that may or may not use a release channel, so in one scenario I need to ignore both min_master_version and node_version, while when release_channel == "UNSPECIFIED" I do not want to ignore them...
It would look something like:
data "google_container_engine_versions" "location" {
  location       = "southamerica-east1"
  project        = "leviatan-prod"
  version_prefix = var.kubernetes_channel != "UNSPECIFIED" ? var.kubernetes_version : ""
}

resource "google_container_cluster" "cluster" {
  provider = google-beta
  # [...]
  min_master_version = var.kubernetes_channel != "UNSPECIFIED" ? var.kubernetes_version : data.google_container_engine_versions.location.latest_master_version
  node_version       = var.kubernetes_channel != "UNSPECIFIED" ? var.kubernetes_version : data.google_container_engine_versions.location.latest_master_version

  release_channel {
    channel = var.kubernetes_channel
  }

  lifecycle {
    ignore_changes = [
      var.release_channel != "UNSPECIFIED" ? min_master_version : null,
      var.release_channel != "UNSPECIFIED" ? node_version : null,
    ]
  }
}
By the way, it seems that using both the channel and the image versions yields some computed nulls
in the resource's code, but this is neither a problem (in case the current channel versions are used) nor part of the scope of the discussion...
Anyhow, these are the errors:
Error: Invalid expression
on main.tf line XXX, in resource "google_container_cluster" "cluster":
XXX: var.kubernetes_channel != "UNSPECIFIED" ? min_master_version : null,
A single static variable reference is required: only attribute access and
indexing with constant keys. No calculations, function calls, template
expressions, etc are allowed here.
Error: Invalid expression
on main.tf line YYY, in resource "google_container_cluster" "cluster":
YYY: var.kubernetes_channel != "UNSPECIFIED" ? node_version : null,
A single static variable reference is required: only attribute access and
indexing with constant keys. No calculations, function calls, template
expressions, etc are allowed here.
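As the error message says, ignore_changes entries must be static attribute references, so the only form Terraform currently accepts here (which gives up exactly the conditionality this issue is asking for) would be an unconditional list along these lines:

```hcl
lifecycle {
  ignore_changes = [
    min_master_version,
    node_version,
  ]
}
```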
Any chance this would be implemented? We want to introduce something like a blame step in CI/CD to re-tag only changed resources with various info from the build, and the boilerplate that needs to be included with each and every resource in lifecycle.ignore_changes is obnoxious.
@danieldreier It would be a significant added benefit even if the block were limited to evaluating expressions that are not dependent on any state or resources, such as directly set variables and functions of them. This would still allow the expression to be evaluated very early in the processing, but at the same time allow option flags.
Note to other people reading this: please do not add "+1" comments. Instead, click on the icon at the bottom of the issue statement.
Another use case is I have a module for a lambda function. Most of the time, I want to ignore changes to the actual code of the lambda, because that is managed outside of terraform. But in a few cases, terraform should manage the code as well, so I don't want to ignore changes.
I also tried doing something like:
dynamic "lifecycle" {
  for_each = var.manage_code ? [] : [1]
  content {
    ignore_changes = [
      filename,
      runtime,
      handler,
      source_code_hash,
    ]
  }
}
But then I get an error that Blocks of type "lifecycle" are not expected here. And of course modules don't support lifecycle blocks either...
The only way I can find to do this is to repeat all of the configuration, with the only change being the lifecycle.
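Spelled out, that repetition looks roughly like the sketch below (an aws_lambda_function is assumed, and all variable and resource names are illustrative, not from a real module):

```hcl
# Sketch of the "repeat everything" workaround: two copies of the
# resource, selected by count, differing only in the lifecycle block.
resource "aws_lambda_function" "managed" {
  count         = var.manage_code ? 1 : 0
  function_name = var.function_name
  role          = var.role_arn
  handler       = var.handler
  runtime       = var.runtime
  filename      = var.filename
}

resource "aws_lambda_function" "unmanaged" {
  count         = var.manage_code ? 0 : 1
  function_name = var.function_name
  role          = var.role_arn
  handler       = var.handler
  runtime       = var.runtime
  filename      = var.filename

  lifecycle {
    ignore_changes = [filename, runtime, handler, source_code_hash]
  }
}
```

Every shared attribute then has to be kept in sync by hand between the two blocks, which is exactly the duplication being complained about.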
Hi,
it seems like this is just the fresh version of this issue (either dynamic blocks or variable interpolation would do the trick for most of us, I guess).
@apparentlymart The lack of this functionality is a real problem: we can't easily secure our production resources (we obviously do not want some of them to be destroyed) while keeping flexibility for our non-production environments (if I tell Terraform not to delete anything, how is the CI/CD supposed to clean up old environments?). This is a real production issue, definitely not an improvement or feature request. It makes the whole lifecycle system flawed, and it should be considered a bug in it. I just can't understand why HashiCorp can't even give us a clear answer on this. It's been 5 years.
Terraform should definitely allow this, and we need to know when this could land.
And PLEASE people, do not add "+1" and noise on this issue. That's what closed the previous ticket, and that's why we never got any response.
@Skyfallz some feature requests might never be implemented; we have to accept that and move on with the workarounds, I believe. One suggestion for everyone who struggles with this is to put an easy-to-sed placeholder in that location (like LCYCLE_REPLACEME_LIST) and run a 's/find/replace/g' every time before running terraform (if it's a CI/CD pipeline and the modified tf files get discarded in that build job anyway).
@Dmitry1987, besides the fact that that is an incredibly awkward workaround, that doesn't solve the problem if the lifecycle is in a module that is used multiple times in the same workspace with different lifecycle requirements. The only workarounds I know of are to duplicate all the config, give up on HCL and use some other tool to generate terraform json files (which I would probably have to build myself, since I don't know of a well-established tool to do this), or use something other than terraform altogether.
@Dmitry1987 @tmccombs this workaround is not that awkward (no more awkward than the fact that we can't do it natively in TF anyway), especially if you consider doing this only in your 'destroy' step in a CI/CD (this way lifecycle blocks are still present on apply). But for sure, this is not pretty, and we should definitely have a clean solution instead. I'm working on a Terraform wrapper to handle this use case (and some others, like https://github.com/hashicorp/terraform/issues/17599); I'll share it when it's done.
@Skyfallz how would that workaround work for the ignore_changes example I gave above?
@tmccombs we are currently investigating use of Pulumi instead of Terraform, since it seems to not have these awkward issues and is much more succinct in terms of representation. Basically, instead of writing wrappers around Terraform you can use JS/C#/Python to describe your infra.
I won't argue if it seems awkward to some :) but that's one possible way to do it that I can think of (rendering all TF in JSON might be better or worse, depending on the size of the infra and how frequently changes are made). I'd never seen Pulumi; thanks @ReVolly, I'll check it out.
Oh well, the first Pulumi example reminds me of using the vanilla SDK of a cloud provider, so it's probably a better comparison against SDKs than against Terraform (which is easier to use than an SDK because it's declarative and keeps state):
import pulumi
from pulumi_azure import core, storage

# Create an Azure Resource Group
resource_group = core.ResourceGroup("resource_group")

# Create an Azure resource (Storage Account)
account = storage.Account("storage",
    resource_group_name=resource_group.name,
    account_tier='Standard',
    account_replication_type='LRS',
    tags={"Environment": "Dev"})

# Export the connection string for the storage account
pulumi.export('connection_string', account.primary_connection_string)
I wonder what large infra looks like; probably similar to raw SDK code (like boto3 in Python, if someone does AWS in boto).
@Dmitry1987 Pulumi really feels like a next-level Terraform (they even use its modules and have a utility to convert tf=>pulumi). It keeps state as well, and yeah, it's a bit deceptive, because it seems like Pulumi is defining actions to be performed rather than describing desired state, but that is actually not the case. It's doing the same thing as tf, but in an imperative way; underneath it all it's very similar to the Terraform concept, which is to build a resource graph and then create it via the underlying provider. You can actually take a look at the comparison to tf here.
sounds good thanks for sharing :)
@Dmitry1987 @ReVolly To say that Pulumi is next level to Terraform is just wrong by definition. As the Pulumi comparison page says: Terraform is declarative (HCL) and Pulumi is programmatic ("any language"). So these two approaches are completely different and therefore cannot be compared at all (just like apples and pears).
That said, you probably could compare Terraform with Puppet and Pulumi with Chef, all of which I have used in various projects. And my experience with the programmatic approach is that the resulting code needs much more maintenance in the long run, as code evolves. Especially in the DevOps age, where all the developers care for infrastructure as well. So what Pulumi and the like promote as an advantage (being able to do everything you want) quickly turns into a maintenance nightmare.
What I often perceived, when I found myself stuck using the declarative approach, saying "I would like to code this thing", was that there was a flaw in the overall architecture that I had created. So that's the maintenance effort in the declarative world: to keep the architecture up to date, which means to constantly improve it, while keeping the code readable to everyone!
Another use case is ignoring the load balancer target group changes that CodeDeploy makes, which we usually ignore; having this support would let us avoid ignoring changes made to the load balancer itself or to the target groups outside of CodeDeploy.
@Skyfallz some feature requests might never be implemented; we have to accept that and move on with the workarounds, I believe. One suggestion for everyone who struggles with this is to put an easy-to-sed placeholder in that location (like LCYCLE_REPLACEME_LIST) and run a 's/find/replace/g' every time before running terraform (if it's a CI/CD pipeline and the modified tf files get discarded in that build job anyway).
We used this method as well for module configuration limitations in the azurerm frontdoor resource. sed replacement is basically on-the-fly templating to generate the file based on user vars when the underlying tech doesn't allow this flexibility.
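For completeness, the sed-placeholder technique mentioned a few times in this thread can be sketched as below. The placeholder token and file name are illustrative; in a real pipeline the sed step would run just before terraform plan/apply (or only before the destroy step):

```shell
# Hedged sketch of the sed-placeholder workaround; the token name and
# file are illustrative, not from anyone's actual pipeline.
cat > main.tf <<'EOF'
  lifecycle {
    ignore_changes = LCYCLE_REPLACEME_LIST
  }
EOF

# CI step: swap the placeholder for the real list before running terraform.
sed -i 's/LCYCLE_REPLACEME_LIST/[tags]/g' main.tf
grep 'ignore_changes' main.tf
```

Because the substitution happens before terraform ever parses the file, Terraform only ever sees a static literal, which is what the lifecycle block requires.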