"Variables may not be used here" for `prevent_destroy`
Terraform Version
Terraform v0.12.6
Terraform Configuration Files
locals {
  test = true
}

resource "null_resource" "res" {
  lifecycle {
    prevent_destroy = locals.test
  }
}

terraform {
  required_version = "~> 0.12.6"
}
Steps to Reproduce
terraform init
Description
The documentation notes that
[...] only literal values can be used because the processing happens too early for arbitrary expression evaluation.
so while I'm bummed that this doesn't work, I understand that I shouldn't expect it to.
However, we discovered this behavior because running terraform init failed where it had once worked. And indeed, if you comment out the variable reference in the snippet above, and replace it with prevent_destroy = false, it works - and if you then change it back it keeps working.
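For reference, this is the working variant described above, i.e. the same resource with the reference replaced by a literal:

resource "null_resource" "res" {
  lifecycle {
    # a literal value is accepted; the local/variable reference is not
    prevent_destroy = false
  }
}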
Is that intended behavior? And will it, if I do this workaround, keep working?
Debug Output
λ terraform init
2019/08/21 15:48:54 [INFO] Terraform version: 0.12.6
2019/08/21 15:48:54 [INFO] Go runtime version: go1.12.4
2019/08/21 15:48:54 [INFO] CLI args: []string{"C:\\Users\\Tomas Aschan\\scoop\\apps\\terraform\\current\\terraform.exe", "init"}
2019/08/21 15:48:54 [DEBUG] Attempting to open CLI config file: C:\Users\Tomas Aschan\AppData\Roaming\terraform.rc
2019/08/21 15:48:54 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2019/08/21 15:48:54 [INFO] CLI command args: []string{"init"}
There are some problems with the configuration, described below.
The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
Error: Variables not allowed
on main.tf line 7, in resource "null_resource" "res":
7: prevent_destroy = locals.test
Variables may not be used here.
Error: Unsuitable value type
on main.tf line 7, in resource "null_resource" "res":
7: prevent_destroy = locals.test
Unsuitable value: value must be known
Hi @tomasaschan,
prevent_destroy cannot support references like that, so if you are not seeing an error then the bug is that the error isn't being shown; the reference will still not be evaluated.
Just ran into this, but with a "normal" variable. It would be great if we could use variables in the lifecycle block, because without variables I'm literally unable to use prevent_destroy in combination with a "Destroy-Time Provisioner" in a module.
I'm hitting this, too. Please allow variables derived from static values to be used in lifecycle blocks. This would let me effectively use modules to run dev & test environments with the same config as prod, while providing deletion protection for prod resources. AWS RDS has a deletion_protection option that is easy to set. S3 Buckets have an mfa_delete option which is difficult to enable. I found no way to prevent accidental deletion of an Elastic Beanstalk Application Environment.
module "backend" {
source = "../backend"
flavor = "dev"
...
}
resource "aws_elastic_beanstalk_environment" "api_service" {
lifecycle {
prevent_destroy = (var.flavor == "prod") // <-- error
}
...
}
Seen multiple threads like this. There is an ongoing issue (https://github.com/hashicorp/terraform/issues/3116) which is currently open, but @teamterraform seem to have made it private to contributors only. Being able to set lifecycle properties from variables is required in a lot of production environments. We are trying to give our development teams control of their infrastructure whilst maintaining standards using modules. Deployment is 100% automated for us, and if the dev teams need to make a change to a resource, or remove it, then that change would have gone through appropriate testing and peer review before being checked into master and deployed.
Our modules need to be capable of having lifecycle as variables. Can we get an answer as to why this is not supported?
My use case is very much like @weldrake13's. It would be nice to understand why this can't work.
I would also appreciate it if Terraform allowed variables for specifying "prevent_destroy" values. As a workaround, since we use the S3 backend for managing our Terraform workspaces, I block access to the Terraform workspace S3 bucket for the Terraform IAM user in my shell script after Terraform has finished creating the prod resources. This effectively locks down the infrastructure in the workspace and requires an IAM policy change to re-enable it.
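For illustration only, a rough sketch of the kind of deny policy that shell script applies, expressed here in HCL; the user name, bucket name, and policy scope are all assumptions, not details from the comment above:

resource "aws_iam_user_policy" "lock_terraform_state" {
  name = "deny-terraform-state-access"   # hypothetical policy name
  user = "terraform-deployer"            # assumed Terraform IAM user
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Deny"
      Action   = "s3:*"
      Resource = [
        "arn:aws:s3:::my-terraform-workspaces",    # assumed state bucket
        "arn:aws:s3:::my-terraform-workspaces/*",
      ]
    }]
  })
}

Re-enabling the workspace then requires removing or loosening this policy, which is the "IAM policy change" mentioned above.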
I write tests for my modules. I need to be able to re-run tests over and over. There's no way for me to delete buckets in a test account and set protection in a production account. Swing and a miss on this one.
Is there a general issue open with Terraform to improve conditional support? Off the top of my head I can think of the following limitations:
- Variable defaults / declarations cannot use conditionals (see the sketch after this list)
- Lifecycle rules cannot use conditionals
- The provider argument cannot use conditionals
- Modules cannot have count set
All of these make writing enterprise-level Terraform code difficult and more dangerous.
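As a sketch of the first limitation in the list above, with assumed names (this is exactly the kind of expression Terraform rejects):

variable "instance_type" {
  # Not allowed: a variable default must be a literal value,
  # so this conditional fails at parse time.
  default = var.flavor == "prod" ? "m5.large" : "t3.micro"
}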
This is the same as https://github.com/hashicorp/terraform/issues/3116. Can you close, please?
Hashicorp locked down 3116. If this gets closed, then those following can't view the issue.
It's over 4 years since #3116 was opened; I think we'd all appreciate some indication of where this is. Is it still waiting on the proposal mentioned in this comment, #4149?
Thought I'd offer up a workaround I've used in some small cases. The example here is a module for a gcloud sql instance, where obviously in production I want to protect it, but in more ephemeral environments I want to be able to pull the environment down without temporarily editing the code.
It's not pretty but it works, and is hidden away in the module for the most part:
### variables.tf

variable "conf" {
  type = map(object({
    database_version = string
    ...
    prevent_destroy = string
  }))
  description = "Map of configuration per environment"
  default = {
    dev = {
      database_version = "POSTGRES_9_6"
      ...
      prevent_destroy = "false"
    }
    # add more env configs here
  }
}

variable "env" {
  type        = string
  description = "Custom environment used to select conf settings"
  default     = "dev"
}

### main.tf

resource "google_sql_database_instance" "protected" {
  count = var.conf[var.env]["prevent_destroy"] == "true" ? 1 : 0
  ...
  lifecycle {
    prevent_destroy = "true"
  }
}

resource "google_sql_database_instance" "unprotected" {
  count = var.conf[var.env]["prevent_destroy"] == "false" ? 1 : 0
  ...
  lifecycle {
    prevent_destroy = "false"
  }
}

### outputs.tf

output "connection_string" {
  value = coalescelist(
    google_sql_database_instance.protected.*.connection_name,
    google_sql_database_instance.unprotected.*.connection_name,
  )
  description = "Connection string for accessing database"
}
The module originated prior to 0.12, so those conditionals could well be shortened using bool now. Also, I appreciate this duplicates one resource, and it would be much worse elsewhere for larger configurations.
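A minimal sketch of that bool-based shortening, assuming the same variable layout as above:

variable "conf" {
  type = map(object({
    prevent_destroy = bool
    # ... other per-environment settings
  }))
  default = {
    dev = { prevent_destroy = false }
  }
}

# the count conditions then read directly from the bool, e.g.:
#   count = var.conf[var.env].prevent_destroy ? 1 : 0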
It is so funny. I am asking this question WHY? WHY?
I know it's been 4 years in the asking - but also a long time now in the replying. Commenting on #3119 was locked almost 2 years ago saying "We'll open it again when we are working on this".
Can someone with inside knowledge of the work on this "feature" please step up and give us some definitive answers on simple things like:
- If this will be done?
- Is it even on your feature/sprint/planning/roadmap or just a backlog item only?
- When it may be expected, if it IS on the roadmap?
Thanks for your work, Hashicorp - this tool is awesome! Not having a go at you, just frustrated that this feature is languishing and I NEED it ... Now....
@Penumbra69 and all the folks on here: I hear you, and the use cases you're describing totally make sense to me. I'm recategorizing this as an enhancement request because although it doesn't work the way you want it to, this is a known limitation rather than an accidental bug.
Hi team
Maybe a duplicate of https://github.com/hashicorp/terraform/issues/3116 ?
@danieldreier given that Hashicorp has acknowledged this issue as a "known limitation" based on your June 12, 2020 comment, is the company able to provide a standard or recommended workaround to address this?
I think the recommended workaround is to find-and-replace the value before running terraform :(
Wow, this is a real problem. Either we duplicate all resources with prevent_destroy, or we use m4 or something to do a substitution for this (like you have to do with Dockerfiles). Pretty ugly :-)
Ahh, I tried using dynamic; that didn't work either. There is another issue that could be similar to this one: https://github.com/hashicorp/terraform/issues/24188
This makes prevent_destroy pretty painful to use with a dev/prod environment split for multiple resources. I would like to use it with an assertion, something like:
locals {
  is_prod = replace(lower(var.environment), "prod", "") != lower(var.environment)
}

module "prod_lock" {
  source  = "rhythmictech/errorcheck/terraform"
  version = "1.0.0"

  assert        = local.is_prod == false || var.prevent_destroy == true
  error_message = "you are potentially attempting to destroy prod. Please stop."
}

resource "null_resource" "really_important" {
  lifecycle {
    prevent_destroy = var.prevent_destroy
  }

  depends_on = [
    module.prod_lock
  ]
}
I'm in dire need of support for this functionality. I'm having to find and replace instead of just setting it from a local or a var. I'm not sure why this is such a point of contention to get working, but I'm sure there's more to it than what I can see, so any additional information on the progress, or lack thereof, would be greatly appreciated!
We're also looking forward to using variables in lifecycle blocks.
Example:
Set prevent_destroy to true by default ... override it in our dev environment on a case-by-case basis.
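A minimal sketch of the pattern being asked for (names assumed); today the lifecycle line below is rejected with "Variables may not be used here":

variable "prevent_destroy" {
  type    = bool
  default = true   # protect everything unless explicitly overridden
}

resource "aws_s3_bucket" "example" {
  bucket = "example"

  lifecycle {
    prevent_destroy = var.prevent_destroy   # not currently supported
  }
}

# dev.tfvars (per-case override in the dev environment)
# prevent_destroy = false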
Does using deletion_protection circumvent this issue in restrictive lifecycle blocks? https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance#deletion_protection
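For context, a sketch of what that provider-level option looks like; unlike prevent_destroy, it is an ordinary resource argument and accepts expressions. The surrounding arguments and variable names are assumptions, not from this thread:

resource "aws_db_instance" "example" {
  identifier          = "example"
  engine              = "postgres"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "exampleuser"
  password            = var.db_password        # assumed variable
  deletion_protection = var.flavor == "prod"   # evaluated like any other argument
}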
Just ran across this as well. How disappointing. Maybe my "+1" comment will encourage Hashicorp to re-evaluate this.
Does using deletion_protection circumvent this issue in restrictive lifecycle blocks? https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance#deletion_protection
Hardly. That's a feature of one very specific resource. prevent_destroy is applicable to any Terraform resource.
Looks like this is still an issue.
I am still facing this issue in 2021. Any suggested workarounds?
As a possible workaround resources can be duplicated and conditionally created based on a variable:
resource "aws_s3_bucket" "example" {
count = var.prevent_destroy ? 1 : 0
bucket = "example"
lifecycle {
prevent_destroy = true
}
# ...
}
resource "aws_s3_bucket" "example_no_prevent_destroy" {
count = var.prevent_destroy ? 0 : 1
bucket = "example"
lifecycle {
prevent_destroy = false
}
# ...
}
+1 for requested feature and to keep this alive.
I know it's (probably?) more complicated than it sounds but I almost spit out my coffee when I read ctrl-f.
Wow! It came as a surprise that using a variable for prevent_destroy is not supported! It makes complex environments super difficult!