configuration_aliases in child module: terraform validate fails with "Provider configuration not present"
Terraform Version
v0.15.0
Terraform Configuration Files
terraform {
  required_version = ">= 0.15.0"
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 3.0"
      configuration_aliases = [aws.replica]
    }
  }
}
...
resource "aws_kms_key" "replica_bucket_key" {
provider = aws.replica
...
}
...
Expected Behavior
Expected terraform validate to report only genuine configuration errors for resources referencing the aliased provider.
Actual Behavior
Errors on all resources using the aliased provider. What I find interesting is that it says the resources are in state, but there's no state, per terraform show.
PS C:\REDACTED\s3> terraform version
Terraform v0.15.0
on windows_amd64
+ provider registry.terraform.io/hashicorp/aws v3.37.0
PS C:\REDACTED\s3> terraform show
No state.
PS C:\REDACTED\s3> terraform validate
╷
│ Error: Provider configuration not present
│
│ To work with aws_kms_alias.replica_bucket_key_alias its original provider configuration at provider["registry.terraform.io/hashicorp/aws"].replica is required, but it has been removed. This occurs when a provider configuration is removed while
│ objects created by that provider still exist in the state. Re-add the provider configuration to destroy aws_kms_alias.replica_bucket_key_alias, after which you can remove the provider configuration again.
Steps to Reproduce
- terraform init
- terraform validate
Additional Context
This is a child module that I've migrated from v0.14.4. It was originally using the proxy provider configuration pattern. I tried running validate directly on it after adding the configuration_aliases setting. I'm able to run an apply on a main.tf that references it, but I'm just not able to validate the child module on its own.
References
Is there any update on this issue? This issue is causing a similar, although much smaller scale, impact as https://github.com/hashicorp/terraform/issues/28803 in terms of cluttering up the plans that we ask engineers to review with warnings that are not material and that we cannot do anything to resolve or silence.
In our case, we have a repo that contains our shared modules. That repo has a check that runs terraform validate on each of the modules. If we remove the empty provider block that is causing Warning: Empty provider configuration blocks are not required, then we are forced to remove the terraform validate check because it starts failing with Error: missing provider .... However, if we leave the empty provider block, we get a bunch of noise in the plans from that warning.
The preferred solution to the immediate issue would be changing terraform validate to behave the same with
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.foo]
    }
  }
}
as it does with
provider "aws" {
alias = "foo"
}
or at least emit a warning from validate instead of an error, since I'd rather have the warning show up in the validate output that no one looks at unless it fails than in the plan output that runs much more frequently and is reviewed by humans.
As far as taking a step back and thinking about how Terraform is used in the wild, could we revisit adding a flag to silence all warnings (e.g. Warning: Empty provider configuration blocks are not required) and notes (e.g. Note: Objects have changed outside of Terraform)? I completely get that the warnings and notes are helpful when debugging and I appreciate the effort that the Terraform team has put into exposing this information to users. However, this additional information is not relevant in all contexts, and based on the comments I've seen in related issues, a lot of people are having issues with the amount of noise Terraform is currently generating; it is even breaking popular open source automation tools.
I'm happy to help contribute in any way I can to pushing this along as long as the PR will get reviewed.
Could we not just add a -module flag to terraform validate, telling it to validate the code as if it's a reusable module, not a root module? This would make Terraform stop worrying about missing provider configurations and assume that the providers must be, well, provided when the module is used. This way we could get rid of the "Empty provider configuration..." warning and still be able to validate reusable modules in cases where we just don't have a root module...
@mkielar, the issue here happens long before validate comes into play. In order to validate the config, the correct providers must be initialized. If the overall provider configuration is not correct, the configuration cannot be loaded at all (i.e. the error here is from loading the config, not from validate). The old behavior was incorrect for any version of Terraform that could access namespaced providers, since there is no way to know that the correct provider is being used to obtain the schemas with which to validate the configuration.
For now, the only way to correctly validate a module which can accept providers as a parameter is to wrap it in a root module which defines the required provider configurations.
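In practice, such a wrapper might look something like this (a sketch; the module path, region, and the replica alias are placeholders based on the original report):
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "replica"
  region = "us-west-2"
}

# Call the module under test, wiring up the aliased provider it requires.
module "under_test" {
  source = "../"

  providers = {
    aws.replica = aws.replica
  }
}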
@jbardin, obviously I don't know the internals of Terraform too well. However, from a user standpoint, if I do the following:
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.foo]
    }
  }
}
I'm basically telling Terraform that there will be two providers in this module: aws and aws.foo. However, if I just leave it at that, terraform validate fails with Error: missing provider provider["registry.terraform.io/hashicorp/aws"].foo.
If however I add:
provider "aws" {
alias = "foo"
}
Then running terraform validate for such a module passes without warnings. But why?! What's the difference? I didn't tell Terraform anything more than it already knew! I just declared (using different syntax) that there will be an extra aws.foo provider, but Terraform already knew that from the required_providers section, didn't it?
It seems to me that in both cases Terraform has all the information required to properly run validation and validate without errors, yet in the first case (with the missing provider block) it somehow refuses to admit it ;). It seems that if we make Terraform accept that truth, not even the -module flag would be needed.
The provider block is not simply different syntax for the same thing. The required_providers block defines what providers are required by the module and what they will be called, while the provider block defines an actual configuration for a specific provider. Having a provider configuration declared within a module means we cannot expand that module into multiple instances, nor can that module later be removed from the configuration.
Older versions of Terraform could treat the empty provider block as a "proxy" for a provider passed in, but there was no way to differentiate that from an actual provider declared within the module in all cases. It was a confusing syntax that overloaded the meaning of the provider block, causing it to change behavior based on the context of the parent module, and it led to numerous issues and support escalations.
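To make the distinction concrete, the following is a sketch (hypothetical module contents) of the problematic case: an actual provider configuration embedded in a child module, which is exactly what prevents calling that module with count or for_each, or removing it later:
# Inside the child module: this configures a provider rather than merely
# declaring a requirement, so the module can no longer be expanded into
# multiple instances or cleanly removed from the configuration.
provider "aws" {
  region = "us-east-1"
}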
The primary reason an empty provider block in a module was not turned into an error was due to timing, with limited releases pending to fully deprecate the behavior before 1.0.
In order to test a non-root module in this way, something must always be added; either temporary provider configuration to make it validate as if it was a root module, or call the module from a dummy root module. How to best handle this is what needs to be designed here, while also planning on how to integrate any changes into the experimental test command.
Just published https://github.com/bendrucker/terraform-configuration-aliases-action to help with this. It generates provider blocks to satisfy all required configuration_aliases in the module. If you're looking to call terraform validate from GitHub Actions, you can just plop this step before run: terraform validate and validate your child module with required provider aliases as if it were a fully formed root module.
This issue still occurs in Terraform v1.0.5.
We're essentially being forced to choose between loud warnings when running terraform init on the root module or complete failure when running terraform validate on child modules. This is severely broken and should have raised flags during the development of Terraform 0.15/1.0.
Regardless of when this is patched, please update the tests for building the Terraform CLI to check whether the CLI breaks when used against a wide spectrum of child modules.
This bug is very frustrating. I have just logged a support req with Hashicorp to try and get it moving.
It took me two weeks to get past the first-line support engineer and have this acknowledged as a bug. I was initially told configuration_aliases was deprecated, which is clearly not the case. It is flagged with the Terraform product manager now. I recommend others do the same if you have support contracts through your orgs.
Let's assume I have the following directory structure:
.
├── main.tf
└── vpc
└── main.tf
My ./vpc/main.tf (sub module) looks like this:
terraform {
  required_version = ">= 1.0.0"
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "3.66.0"
      configuration_aliases = [aws.example_alias]
    }
  }
}

// This resource uses the unaliased `aws` provider.
resource "aws_vpc" "unaliased" {
  cidr_block = "10.0.0.0/16"
}

// This resource uses the `aws` provider with the `example_alias` alias.
resource "aws_vpc" "aliased" {
  provider   = aws.example_alias
  cidr_block = "10.1.0.0/16"
}
This means my ./vpc/main.tf module expects both the unaliased aws provider and the aliased aws.example_alias provider as input.
My ./main.tf (root configuration) looks like this:
terraform {
  required_version = ">= 1.0.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.66.0"
    }
  }
}

// Unaliased `aws` provider.
provider "aws" {
  region = "us-east-1"
}

// Aliased `aws` provider with the `example_alias` alias.
provider "aws" {
  alias  = "example_alias"
  region = "us-west-1"
}

module "vpc" {
  source = "./vpc"

  providers = {
    // This is unnecessary because it's already implied.
    aws = aws,

    // Explicitly define which provider will be passed into the sub module as
    // the `example_alias` `aws` provider.
    aws.example_alias = aws.example_alias,
  }
}
In this root configuration, I'm explicitly passing both the unaliased aws provider and the aliased aws.example_alias provider to my sub module. That is because those are the providers my sub module expects as input.
At this point in time I can terraform init the root configuration and terraform validate it successfully.
Here are a few points to note.
It's not necessary to pass unaliased providers to sub modules because they are passed implicitly. I could remove the line aws = aws, from my root configuration and things would still init and validate successfully.
I could pass my unaliased aws provider as an aliased provider to the sub module by changing the line aws.example_alias = aws.example_alias, to aws.example_alias = aws,. The opposite is also true.
However, if I remove the line aws.example_alias = aws.example_alias, entirely and do not satisfy the aws.example_alias provider my sub module is asking for, then I get an error on terraform init:
│ Error: No configuration for provider aws.example_alias
│
│ on main.tf line 22:
│ 22: module "vpc" {
│
│ Configuration required for module.vpc.provider["registry.terraform.io/hashicorp/aws"].example_alias.
│ Add a provider named aws.example_alias to the providers map for module.vpc in the root module.
Based on reading this issue multiple times, it seems the core frustration is being unable to terraform init and terraform validate a sub module directly.
James explained the issue fairly well here:
In order to test a non-root module in this way, something must always be added; either temporary provider configuration to make it validate as if it was a root module, or call the module from a dummy root module. How to best handle this is what needs to be designed here, while also planning on how to integrate any changes into the experimental test command.
I've always opted for the latter recommendation of creating some root configuration that calls the desired sub module and running terraform init or terraform validate against that. That's perhaps why I never ran into this issue before. I can agree that there should perhaps be some functionality added to terraform validate for executing it directly against a sub module. Regardless, I'll personally stick to my current workflow of adding a root configuration and running terraform init and terraform validate against that, as it is more representative of how a user would interact with a given module.
@sudomateo My root configuration looks like this:
terraform {
  required_version = "~> 1.1.3"

  backend "s3" {
    bucket         = "terraform-bucker"
    key            = "vpn.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock-dynamodb"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.63.0"
    }
  }
}

provider "aws" {
  region = var.region
  alias  = "owner"

  assume_role {
    role_arn = var.assume-role-owner
  }
}

provider "aws" {
  region = var.region
  alias  = "accepter"

  assume_role {
    role_arn = var.assume-role-accepter-nj
  }
}
But I have a query regarding child provider configuration. As I'm using an S3 bucket for storing state, should I include the backend block in the child module as well, or not?
Or is this much code enough for the child module?
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 3.63.0"
      configuration_aliases = [aws.owner, aws.accepter]
    }
  }
}
@vp393001 Welcome to the discussion! Your specific question is a bit outside the scope of this GitHub issue. In the future, questions like that are better asked in our community Discuss forums or in a separate GitHub issue. This helps keep the discussion on the GitHub issue focused on the actual topic of the GitHub issue. Regardless, here are the answers to your questions.
But I have a query regarding child provider configuration. As I'm using an S3 bucket for storing state, should I include the backend block in the child module as well, or not?
Backend configuration should be defined in your root module only. It should not be defined in a child module as child modules are meant to be called from a root module.
Or is this much code enough for the child module?
Child modules should specify the providers they require and the supported Terraform versions. That way, a root module can be aware of those constraints when calling the child module.
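For example, a child module combining both constraints might look like this (a sketch assembled from the snippets above; the version values are illustrative):
terraform {
  required_version = "~> 1.1.3"

  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 3.63.0"
      configuration_aliases = [aws.owner, aws.accepter]
    }
  }
}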
Root modules in Terraform have unfortunately always been a little different than called modules, and this behavior is a symptom of that since all of Terraform's commands assume that they are dealing with root modules, which should always include any needed provider configurations for themselves and the child modules.
I can definitely understand the use-case of wanting to validate a shared module in a way that answers the question about whether the module is valid itself, regardless of the context of where it's used. There's a similar problem for the module testing experiment, where we need a way to give a shared module all of the outside stuff it needs to actually work without modifying the module itself. In that case, we achieve that by writing a root module for each test scenario, which calls into the module under test.
As others noted further up the thread, you can follow a similar strategy to create a configuration which includes a shared module for validation purposes. If you put it in a directory under tests/ then it could even double as a terraform test case, but of course terraform test is still experimental and so that's an optional extra benefit.
To do this, you can use a directory structure something like this:
variables.tf
main.tf
outputs.tf
tests/
  valid/
    main.tf
The tests/valid/main.tf would contain something like this:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

# A placeholder provider configuration
provider "aws" {
  region = "us-east-1"
}

module "m" {
  source = "../.."

  # (valid placeholder values for any required arguments)

  providers = {
    aws.replica = aws
  }
}
You can then do validation like this:
- terraform -chdir=tests/valid init
- terraform -chdir=tests/valid validate
This gives the validate command a valid, root-module-headed configuration tree to work with, which it will then validate as a whole.
I would like to support the validation of partial configuration trees (that is, a tree where the "root module" isn't really a root module) but this would be the first situation where Terraform's configuration loader and models would need to decode and represent such a thing, and so I expect there will be some semi-disruptive restructuring to do before it would be possible.
The above is what can work with today's Terraform, and is in essence the same idea as writing a small stub program to exercise a library for testing purposes in a general-purpose language. That is the approach I'd recommend that module maintainers use today; also consider the possibility of amortizing the work of setting that up by using it for testing changes to your module during development, whether handled in bulk by terraform test or by just manually running plan and apply in the testing-only root module.
Additional info: when a root module calls the child module as part of a for_each loop, supplying the aliased provider inside the child module breaks the whole thing:
Error: Module module.this contains provider configuration
Providers cannot be configured within modules using count, for_each or depends_on.
I need a way to define/inject both a main azurerm provider and one other aliased provider that can be consumed by the child module as part of a for_each loop.
I tried adding configuration_aliases under the required_providers section, but was unable to get this to work.
See below for code extract:
############################
# provider.tf
provider "azurerm" {
  alias           = "other_subscription"
  subscription_id = "xxxxxxx-xxxxx-xxxxxx-xxxxxx-xxxxx"

  features {}
}

############################
# main.tf
module "this" {
  source   = "../terraform"
  for_each = var.virtual_machines

  providers = {
    azurerm = azurerm.other_subscription
  }

  create_availability_set = false
  etc etc etc
}

############################
# versions.tf
terraform {
  required_version = ">=1.0.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=3.0.0"
    }
  }
}

############################
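For reference, the documented shape for this pattern is to declare the alias requirement in the child module and wire both providers explicitly from the root. A sketch follows (names taken from the extract above; the child-side block is an assumption about the intended layout):
############################
# Child module versions.tf: declare, but do not configure, the aliased provider.
terraform {
  required_providers {
    azurerm = {
      source                = "hashicorp/azurerm"
      version               = ">=3.0.0"
      configuration_aliases = [azurerm.other_subscription]
    }
  }
}

############################
# Root module main.tf: pass both the default and the aliased configuration.
module "this" {
  source   = "../terraform"
  for_each = var.virtual_machines

  providers = {
    azurerm                    = azurerm
    azurerm.other_subscription = azurerm.other_subscription
  }
}
Given the report above that this didn't work, treat it as the documented starting point rather than a confirmed fix.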
A workaround I've used (in the case of just needing to run terraform validate in CI for testing) is to have a file containing the provider, e.g.:
# provider.tf.validate-fix
provider "aws" {
  region = "us-east-1"
  alias  = "useast1"
}
Then just add a step to the CI script / GitHub Action that renames it to provider.tf before running terraform validate. terraform validate allows the provider to be defined along with the configuration_aliases.
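A minimal CI step along those lines might be (a sketch; -backend=false skips backend initialization, which a validate-only run doesn't need):
# Activate the placeholder provider file only for validation
mv provider.tf.validate-fix provider.tf
terraform init -backend=false
terraform validate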
Thanks for this workaround. It helped the CI validate stage pass for the child module. Meanwhile, for the root/calling module, bumping the aws provider to v4.17.1 got rid of the annoying Warning on every init, plan, and apply of the pipeline. Happy days!
╷
│ Warning: Empty provider configuration blocks are not required
│
│ on .terraform/modules/<redacted>/provider.tf line 15:
│ 15: provider "aws" {
│
│ Remove the aws.va provider block from module.<redacted>.
╵
Success! The configuration is valid, but there were some validation warnings as shown above.
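(For reference, the bump described above is just a version constraint change in the calling module's required_providers, e.g.:)
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.17.1"
    }
  }
}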
Any updates on this? I am currently facing the same situation as darrens280. Did you find a workaround for when this error comes from a module that is within a count, for_each, or depends_on?
I just ran into this on 1.2.6.
Could we get an update on this? I'd prefer not to create workarounds with scripts during our CI process to get around this.
The comment above, https://github.com/hashicorp/terraform/issues/28490#issuecomment-1045008605, is still the current recommendation. There are no other official updates on this issue at this time.
There's one way I can think of to almost programmatically solve this problem, and I hate it.
We're told to run tf validate only on root modules, but Terraform doesn't provide any way for us to indicate whether a particular directory is home to a root module or only submodules.
So, if you want to guard against bad commits by requiring a successful tf validate, you need to be able to figure out what kind of directory (root or sub) the commit touches.
If it's a root module, great, just run tf validate.
If not, then you need to figure out which root modules import this code and run tf validate there. Also not so great, because now the scope of your validation has expanded to include errors that are not caused by the commit you want to validate.
In both cases you need to build a DAG, which means you need to scan your entire repo; because the parent-child references exist only on the parent, anything in the repo could be importing the module.
Even with all this work to map out relationships between modules, you still can't definitively say that a node with no parents is a root node. It could be an orphaned child.
Maybe you just turned down some bit of infrastructure that referenced a submodule that you want to keep around because you expect to use it in the near future. Good luck validating it, because from the DAG's point of view it is a root node, and if it specifies a required provider that isn't also defined, then your validation will fail.
Or, maybe you want to share re-usable modules, like https://github.com/dsaidgovsg/terraform-modules. You can't rely on tf validate to guard against bad commits for such a thing.
My use case is that I've upgraded our required_version of Terraform across our whole repo. This touches 170 files. I thought I could get a base level of confidence that I haven't broken anything by doing this:
git show --name-only --pretty="" | grep .tf | xargs -n1 dirname | sort | uniq | xargs -n1 -I{} sh -c 'echo "testing {}" ; cd $repo/{} && terraform init && terraform validate'
This works on most of my repo, but not everywhere. After working through the first set of issues I was thinking to turn this validation into a commit hook, but it's become far more complicated than I expected.
Thanks for the good idea, @Stretch96!
We're trying out a pre-commit hook that requires manually dropping a file into a child module. It's mostly self-documenting, but that extra random file isn't ideal.
Any new plans for a fix here? This is a frustrating bug. At the very least, it would be nice if we could remove the Warning: Redundant empty provider block that comes with adding provider blocks to resolve this.
Any chance this issue is going to be fixed soon? @apparentlymart @jbardin
Why has provider alias support been so bad in Terraform for so many years, with nobody caring?
A provider alias can't be passed via a variable (what is even worse, support was added in 0.11 and later removed, if I'm not mistaken), which is why we have to hardcode providers in modules.
When we hardcode a provider alias in a module, it can't be linted with terraform validate anymore...
Do you still think at HashiCorp that people deploy infra to one AWS account and one region in 2023? I can't explain such bad provider support in any other way...
The original error is still there as of Terraform v1.5.7 (latest non-alpha, non-beta) and v1.6.0-beta2. How are we supposed to handle module validation when one or more providers are expected to be passed in by the calling code? Any official statement (and documentation updates)?
My earlier comment is the current recommendation.