Currently selected workspace does not exist
Terraform Version
1.2.*
1.3.7
1.3.8
1.3.9
1.4.0
Terraform Configuration Files
providers:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.4.3"
    }
    template = {
      source  = "hashicorp/template"
      version = "~> 2.2.0"
    }
  }
  required_version = "~> 1.3.0"
}
backend:
terraform {
  backend "s3" {
    bucket               = "terraform-state-1234"
    role_arn             = "arn:aws:iam::1111111111111111:role/state-role"
    key                  = "terraform.tfstate"
    workspace_key_prefix = "env"
    region               = "eu-central-1"
    dynamodb_table       = "terraform-state-lock"
  }
}
Debug Output
https://gist.github.com/yyarmoshyk/b17cdb4b25696cd4317805c3aa5e55fc
Expected Behavior
new workspace created
Actual Behavior
The currently selected workspace (test) does not exist. This is expected behavior when the selected workspace did not have an existing non-empty state. Please enter a number to select a workspace:
- default
- dev
Steps to Reproduce
- export TF_WORKSPACE=dev
- terraform init
- export TF_WORKSPACE=test
- terraform init
Additional Context
I need to initialise new workspaces automatically by updating a list of environments. In other words, I need to initialise workspaces in a pipeline: for example, run init for the sandbox environment, then for dev, then for stage, and so on.
I expect terraform init to create a new workspace when the TF_WORKSPACE environment variable is specified, but this doesn't work.
terraform init successfully creates the workspace only when the S3 bucket is empty.
If Terraform detects existing workspaces in the bucket, then it fails with the following error instead of creating the new workspace:
The currently selected workspace (dev) does not exist
The existing workaround seems redundant: init Terraform with the default workspace, then create the workspace with terraform workspace new, next define the TF_WORKSPACE environment variable, and re-run terraform init (a sketch of this is shown below).
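For reference, a minimal shell sketch of that redundant workaround, assuming WORKSPACE is a placeholder variable holding the target environment name:

unset TF_WORKSPACE                             # start from the default workspace
terraform init                                 # initialise the backend first
terraform workspace new "$WORKSPACE" || true   # create the workspace if it is missing
export TF_WORKSPACE="$WORKSPACE"
terraform init                                 # re-run init with the workspace selected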
Terraform should create new workspaces on init based on the value of TF_WORKSPACE
References
No response
Hello @yyarmoshyk ,
I think from Terraform's perspective, a backend needs to be initialized before a workspace can be created. Hence, the expectation that terraform init will create a new workspace when the TF_WORKSPACE environment variable is specified may not be a correct assumption.
Terraform will not work with a non-existing workspace, and this is by design, so that Terraform can detect issues such as someone trying to run an update but accidentally ending up creating a whole new set of infrastructure.
For pipeline automation, you can try the workarounds you mentioned, or use something like the following. The usual approach is a sequence of this form, which creates the workspace if it doesn't exist or leaves an existing one in place. The key here is to set TF_WORKSPACE after creating the workspace. This should fix the error shown in the debug log:
- terraform init
- terraform workspace new ${WORKSPACE} || echo "Workspace ${WORKSPACE} already exists or cannot be created"
- export TF_WORKSPACE=$WORKSPACE
- terraform apply
Here is a detailed discussion of a similar issue, which may be helpful and includes some additional workarounds like the one explained above: https://discuss.hashicorp.com/t/help-using-terraform-workspaces-in-an-automation-pipeline-with-tf-workspace-currently-selected-workspace-x-does-not-exist/40676/2
Thanks
Terraform will not work with a non-existing workspace, and this is by design
This doesn't make any sense to me. What is the point of even using TF_WORKSPACE then? What is effectively being suggested is using terraform workspace select -or-create before terraform init can usefully be run, which has the downside of being one more command that must be executed (in a tool that prides itself on operating on a declarative paradigm rather than an imperative one 😞).
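For clarity, a minimal sketch of what that extra imperative step looks like in a pipeline; the -or-create flag requires Terraform v1.4 or later, and WORKSPACE is just a placeholder here:

terraform init                                        # the backend has to be initialised first
terraform workspace select -or-create "$WORKSPACE"    # the extra imperative command
terraform apply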
I have the same problem, though with a slightly different setup. We have different AWS S3 buckets in different AWS accounts, and when changing the environment from dev to test, for example, we also change the AWS account. This means those inits run in the scope of the 'previous' run, yielding AWS permission issues.
We can handle this because we work from a terminal, and init asks which workspace you would like to use when changing from one AWS account to another.
It would be really nice to have at least an option to "force" switching to a specific workspace on init, or to always use "default" on init; changing the workspace could then be done later on.
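One rough way to approximate that today (a sketch only, assuming the locally selected workspace is recorded in .terraform/environment, so removing that file makes init start from default again; WORKSPACE is a placeholder):

rm -f .terraform/environment                          # forget the previously selected workspace
terraform init -reconfigure                           # re-initialise against the other account's backend
terraform workspace select -or-create "$WORKSPACE"    # then switch explicitly (Terraform 1.4+)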
Just want to +1 this. Running into the same issue now. It would be super helpful if TF_WORKSPACE created a new workspace when the TF_WORKSPACE value does not exist.
So I ran into this problem by having the TF_WORKSPACE var set before running:
if ! terraform init --reconfigure -backend-config="bucket=${TERRA_STATE_BUCKET}"; then
I was setting the workspace before init because I was getting an error during init: Terraform couldn't check the default workspace state, since the IAM permissions for the CodeBuild image were restricted to only the paths the build's state was in. With the workspace set it could check the state file with no problem. However, the state file already existed at that point, so I hadn't seen "The currently selected workspace (production) does not exist." before.
I was able to fix it by removing the path restriction on the IAM permission for the S3 bucket and moving the export of the workspace to after init.
So there seems to be a bit of an issue if you try to limit Terraform's access to the S3 bucket while you are specifically using workspaces. Having the TF_WORKSPACE var set should implicitly mean that if the workspace doesn't exist yet, it should be created, or at least assumed to be created after init is called.
When I was debugging the S3 error, it seemed like Terraform was using the default workspace. However, with the S3 backend configured to use a workspace prefix, it wasn't using that prefix as part of the default workspace's key, which is why it failed with the restrictive IAM permission: it assumed the file it was checking was at the root of the bucket.
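For what it's worth, a sketch of where the S3 backend stores state with the settings shown in this issue (key = "terraform.tfstate", workspace_key_prefix = "env"), which is why a policy scoped only to the workspace prefix breaks the default-workspace check during init:

# Default workspace state object (this is what init checks first):
#   s3://terraform-state-1234/terraform.tfstate
# Named workspace state objects:
#   s3://terraform-state-1234/env/<workspace>/terraform.tfstate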