Feature: Conditionally load tfvars/tf file based on Workspace
Feature Request
Terraform to conditionally load a .tfvars or .tf file, based on the current workspace.
Use Case
When working with infrastructure that has multiple environments (e.g. "staging", "production"), workspaces can be used to isolate the state for different environments. Often, different variables are needed per workspace. It would be useful if Terraform could conditionally include or load a variables file, depending on the workspace.
For example:
application/
|-- main.tf // Always included
|-- staging.tfvars // Only included when workspace === staging
|-- production.tfvars // Only included when workspace === production
Other Thoughts
Conditionally loading a file would be flexible, but possibly powerfully magic. Conditionally loading parts of a .tf/.tfvars file based on workspace, or being able to specify different default values per workspace within a variable, could be more explicit.
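As a purely hypothetical illustration of the per-workspace-defaults idea (this is not valid Terraform syntax, just a sketch of what such a feature could look like):

variable "instance_count" {
  default = 1

  # Hypothetical block: Terraform does not support this today.
  workspace "production" {
    default = 5
  }
}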
Hi @atkinchris! Thanks for this suggestion.
We have plans to add per-workspace variables as a backend feature. This means that for the local backend it would look for variables at terraform.d/workspace-name.tfvars (alongside the local states), but in the S3 backend (for example) it could look for variable definitions on S3, keeping the record of the variables in the same place as the record of which workspaces exist. This would also allow more advanced, Terraform-aware backends (such as the one for Terraform Enterprise) to support centralized management of variables.
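For the local backend, the proposed layout might look something like this (illustrative only; none of this is implemented yet):

terraform.d/
|-- staging.tfvars // would be loaded when the "staging" workspace is selected
|-- production.tfvars // would be loaded when the "production" workspace is selected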
We were planning to prototype this some more before actually implementing it, since we want to make sure the user experience makes sense here. With the variables stored in the backend we'd probably add a local command to update them from the CLI so that it's not necessary to interact directly with the underlying data store.
At this time we are not planning to support separate configuration files per workspace, since that raises some tricky questions about workflow and architecture. Instead, we plan to make the configuration language more expressive so that it can support more flexible dynamic behavior based on variables, which would then allow you to use the variables-per-workspace feature to activate or deactivate certain behaviors without coupling the configuration directly to specific workspaces.
These items are currently in early planning stages and so no implementation work has yet been done and the details may shift along the way, but this is a direction we'd like to go to make it easier to use workspaces to model differences between environments and other similar use-cases.
Awesome, look forward to seeing how workspaces evolve.
We'll keep loading the workspace-specific variables with -var-file=staging.tfvars.
@apparentlymart is there another GitHub issue that is related to these plans? Something we could subscribe to?
I'm interested in this because we currently have a directory in our repo with env/<short account nickname>-<workspace>.tfvars files, and it's a little bit of a pain to have to remember to mention them all the time when doing plans, etc. (Although it's immediately obvious when you forget it on plan and nothing looks like you expect, it could be dangerous to forget it on apply.)
If these were kept in some backend-specific location, that would be great!
We just want to reference a different VPC CIDR block based on our workspace. Is there any other workaround that could get us going today?
A few common workarounds I've heard about are:
- Create a map in a named local value whose keys are workspace names and whose values are the values that should vary per workspace. Then use another named local value to index that map with terraform.workspace to get the appropriate value for the current workspace (see the sketch after this list).
- Place per-workspace settings in some sort of per-workspace configuration store, such as Consul's key/value store, and then use the above technique to select an appropriate Consul server to read from based on the workspace. This way there's only one per-workspace indirection managed directly in Terraform, to find the Consul server, and everything else is obtained from there. Even this map can be avoided with some systematically-created DNS records to help Terraform find a Consul server given the value of terraform.workspace.
- (For VPCs in particular) Use AWS tags to systematically identify which VPC belongs to which workspace and use the aws_vpc data source to look one up based on tag, to obtain the cidr_block attribute.
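For example, a minimal sketch of the first workaround (the workspace names and CIDR values here are just placeholders):

locals {
  # Keys are workspace names; values are the per-workspace settings.
  vpc_cidrs = {
    staging    = "10.0.0.0/16"
    production = "10.1.0.0/16"
  }

  # Index the map with the current workspace name.
  vpc_cidr = "${local.vpc_cidrs[terraform.workspace]}"
}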
@apparentlymart thanks. I think option one is best; option three doesn't work for us, as we create the VPC with Terraform in the same workspace.
@apparentlymart what is the estimated timeline for this functionality? Could it be stripped down to just the tfvars, without the dynamic behaviour based on variables? It sounds like you have a pretty solid understanding of how the tfvars being loaded for a particular workspace is going to work.
Hi @james-lawrence,
In general we can't comment on schedules and timelines because we work iteratively, and thus there simply isn't a defined schedule for when things get done beyond our current phase of work.
However, we tend to prefer to split up the work by what subsystem it relates to in order to reduce context-switching, since non-trivial changes to Terraform Core tend to require lots of context. For example, in 0.11 the work was focused on the module and provider configuration subsystems, because that allowed the team to reload all the context on how modules are loaded, how providers are inherited between modules, etc., and thus produce a holistic design.
The work I described above belongs to the "backends" subsystem, so my guess (though definitely subject to change along the way) is that we'd try to bundle this work up with other planned changes for backends, such as the ability to run certain operations on a remote system, ability to retrieve outputs without disclosing the whole state, etc. Unfortunately all I can say right now is that we're not planning to look at this right now, since our current focus is on the configuration language usability and work is already in progress in that area which we want to finish (or, at least, reach a good stopping point) before switching context to backends.
That becomes quite hard to manage when you are dealing with multiple AWS accounts and Terraform workspaces.
Can anyone explain the difference between a terraform.tfvars and a variables.tf file, and when to use one over the other? And do you need both, or is just one good enough?
A [variables].tf file has variable definitions and default values; a .tfvars file has overriding values, if needed. You can have a single .tf file and several .tfvars files, each defining a different environment.
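For example, with a hypothetical region_name variable:

variables.tf
variable "region_name" {
  default = "europe-west1"
}

staging.tfvars
region_name = "europe-west2"

Running terraform plan -var-file=staging.tfvars replaces the default with the staging value.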
Yet another workaround (based on @apparentlymart's first workaround) that allows you to have workspace variables in different files (easier to diff). When you add new workspaces you only need to: a) add the file, and b) add it to the list in the merge. This is horrible, but works.
workspace1.tf
locals {
  # The outer key is the workspace name, so the maps can be merged in main.tf.
  workspace1 = {
    workspace1 = {
      project_name = "project1"
      region_name  = "europe-west1"
    }
  }
}
workspace2.tf
locals {
  workspace2 = {
    workspace2 = {
      project_name = "project2"
      region_name  = "europe-west2"
    }
  }
}
main.tf
locals {
  workspaces = "${merge(local.workspace1, local.workspace2)}"
  workspace  = "${local.workspaces[terraform.workspace]}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}
Taking @matti's strategy a little further, I like having default values and only customizing per workspace as needed. Here's an example:
locals {
  defaults = {
    project_name = "project-default"
    region_name  = "region-default"
  }
}

locals {
  staging = {
    staging = {
      project_name = "project-staging"
    }
  }
}

locals {
  production = {
    production = {
      region_name = "region-production"
    }
  }
}

locals {
  workspaces = "${merge(local.staging, local.production)}"
  workspace  = "${merge(local.defaults, local.workspaces[terraform.workspace])}"
}

output "workspace" {
  value = "${terraform.workspace}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}
When in workspace staging it outputs:

project_name = project-staging
region_name = region-default
workspace = staging

When in workspace production it outputs:

project_name = project-default
region_name = region-production
workspace = production
I've been thinking about using Terraform in automation and doing something like -var-file $TF_WORKSPACE.tfvars.
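A minimal sketch of that, assuming each workspace has a matching .tfvars file (Terraform reads the TF_WORKSPACE environment variable to select the workspace):

export TF_WORKSPACE=staging
terraform plan -var-file="$TF_WORKSPACE.tfvars"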
Can someone please give an example/template of "Terraform to conditionally load a .tfvars or .tf file, based on the current workspace"? Even the old way works for me. I just want to run multiple infrastructures from a single directory.
@farman022 Just use the -var-file command line option to point to your workspace-specific vars file.
Like @mhfs's strategy, but with one merge:
locals {
  env = {
    defaults = {
      project_name = "project_default"
      region_name  = "region-default"
    }
    staging = {
      project_name = "project-staging"
    }
    production = {
      region_name = "region-production"
    }
  }

  workspace = "${merge(local.env["defaults"], local.env[terraform.workspace])}"
}

output "workspace" {
  value = "${terraform.workspace}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}
locals {
  context_variables = {
    dev = {
      pippo = "pippo-123"
    }
    prod = {
      pippo = "pippo-456"
    }
  }

  pippo = "${lookup(local.context_variables[terraform.workspace], "pippo")}"
}

output "LOCALS" {
  value = "${local.pippo}"
}
Is this feature added in v0.11.7? I tried creating terraform.d with qa.tfvars and prod.tfvars, then selected workspace qa. On plan/apply it seems that it is not detecting qa.tfvars.
No, this hasn't been added yet (current version is v0.11.8).
While we try to follow up with issues like this on GitHub, sometimes things get lost in the shuffle - you can always check the Changelog for updates.
This is a resource that I have used a couple of times as a reference to set up a Makefile wrapping Terraform; maybe some of you will find it useful: https://github.com/pgporada/terraform-makefile
My first thoughts were that workspaces are great for managing environments, but then I found in the docs that they are not recommended. Is this still valid, or is the context different?
In particular, organizations commonly want to create a strong separation between multiple deployments of the same infrastructure serving different development stages (e.g. staging vs. production) or different internal teams. In this case, the backend used for each deployment often belongs to that deployment, with different credentials and access controls. Named workspaces are not a suitable isolation mechanism for this scenario.
https://www.terraform.io/docs/state/workspaces.html
As @gudata pointed out, I too was under the impression that managing multiple environments with workspaces is ideal. But after going through the document I'm totally confused now. How do we manage multiple environments with multi-region deployment with Terraform OSS? How do we structure our Terraform modules and tfvars for multiple environments?
Is this still valid, or is the context different?
@gudata & @beingamarnath - workspaces are still a very valid way to manage multiple environments in many scenarios.
However, workspaces share a backend. If you require isolation between environments, either as a process requirement or as a consequence of having your environments in different accounts, you may not be able to share a backend. In this case, workspaces are not suitable, as you would need to init the backend for each environment.
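To make that concrete: a backend block is static configuration and cannot reference variables or terraform.workspace, so a hypothetical per-environment S3 backend like the one below requires a separate terraform init per environment:

terraform {
  backend "s3" {
    # Hypothetical values; none of these can be interpolated per workspace.
    bucket = "staging-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}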
How do we structure our terraform modules and tfvars of multiple environments
@beingamarnath - Terraform provides workspaces as a mechanism to share code between multiple environments, without having to have multiple backends. It does not provide, or dictate, a structure or convention for your tfvars or variable files. This is the essence of this issue - to introduce a convention for common use cases.
Just in case someone is not aware, there is a wrapper tool called atlantis, which greatly helps with managing different environments with Terraform.
Does anyone have any updates on where this feature is? I know 0.12 is a huge priority, but this functionality would be very useful for anyone using workspaces. There is the potential for human error by passing the wrong -var-file from a completely different workspace.
@ndobbs We're using the following construction:
terraform plan -var-file="var/$(tf workspace show).tfvars"
It's a handy way to avoid errors. Of course, it's only suitable when each workspace possesses a dedicated .tfvars file.
If you can't trust people to remember to add the var-file argument per @agrrh's method, here's a way to "bake it in" that will always work regardless. This is as close to a "native" implementation as it's going to get, since it doesn't require any wrappers or special inputs to Terraform.
The catch is that you have to write your "tfvars" as a simple JSON file and reference the resulting "tfvars" as local.tfenv.variable, but the benefits are that it works even if the file isn't there, and it lets you set intelligent defaults that are selectively overridden by your vars via merge().
In 0.12 you could theoretically configure entire objects this way, since you can do maps of objects with the new type system, but I haven't done any significant testing around that.
main.tf
locals {
  default_settings = {
    numServers   = 5
    numDatabases = 2
  }

  tfsettingsfile = "tfenv-${terraform.workspace}.json"

  # Workaround for https://github.com/hashicorp/terraform/issues/21395
  tfsettingsfilecontent = fileexists(local.tfsettingsfile) ? file(local.tfsettingsfile) : "{}"
  tfenvsettings         = jsondecode(local.tfsettingsfilecontent)

  tfenv = merge(local.default_settings, local.tfenvsettings)
}

output "my_tf_env" {
  value = local.tfenv
}
tfenv-dev.json
{
  "numServers": 2
}
Running Terraform

>terraform workspace show
default

>terraform apply --auto-approve

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

my_tf_env = {
  "numDatabases" = 2
  "numServers" = 5
}

>terraform workspace select dev
Switched to workspace "dev".

>terraform apply --auto-approve

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

my_tf_env = {
  "numDatabases" = 2
  "numServers" = 2
}
You can write your vars as a JSON file under your workspace's state directory:

main.tf.json
{
  "myvarname": "myvalue"
}

and then consume it like this:

locals {
  vars = jsondecode(file("./terraform.tfstate.d/${terraform.workspace}/main.tf.json"))
}

module "mymodule" {
  source = "../../mysource/mysource"
  myvar  = local.vars.myvarname
}
As of 0.12.2, you can also use YAML for this purpose, which is generally better for config settings. While you can get crazy with a lot of nested maps and lists, I generally recommend keeping it a flat "ini-style" if at all possible.
Here is my "terragrunt" style setup, used typically with the free app.terraform.io but works with any backend that supports workspaces (including local). Requires no wrappers!
main.tf
locals {
  # You have to initialize any settings you plan to use, to avoid a
  # "This object does not have an attribute named" error. You can also
  # use conditionals, but this is generally easier.
  default_tfsettings = {
    server_name                         = "mydefaultservername"
    number_of_servers                   = 1
    additional_setting_i_didnt_override = true
  }

  tfsettingsfile = "./environments/${terraform.workspace}/tfsettings.yaml"

  tfsettingsfilecontent = fileexists(local.tfsettingsfile) ? file(local.tfsettingsfile) : "NoTFSettingsFileFound: true"
  tfworkspacesettings   = yamldecode(local.tfsettingsfilecontent)

  tfsettings = merge(local.default_tfsettings, local.tfworkspacesettings)
}

output "servername" {
  value = local.tfsettings.server_name
}

output "numberOfServers" {
  value = local.tfsettings.number_of_servers
}

output "additional_setting_i_didnt_override" {
  value = local.tfsettings.additional_setting_i_didnt_override
}
environments/test/tfsettings.yaml
server_name: testserver
environments/production/tfsettings.yaml
server_name: productionserver
number_of_servers: 200