Terragrunt fails to get outputs of dependency
I get the following error when running `terragrunt apply` in module "bbb":

```
[terragrunt] 2020/09/08 19:46:04 /.../aaa/terragrunt.hcl is a dependency of /.../bbb/terragrunt.hcl but detected no outputs. Either the target module has not been applied yet, or the module has no outputs. If this is expected, set the skip_outputs flag to true on the dependency block.
```

There are outputs, however. My code worked with terragrunt 0.23.36, but broke with 0.23.37. It doesn't work with 0.24.0 either. Using terraform version 0.13.2 during all runs.
The setup is as follows:
```hcl
# module "aaa":
include {
  path = find_in_parent_folders()
}

# terragrunt.hcl in parent folder
dependency "bbb" {
  config_path = "/.../bbb"
}
```
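For reference, the `skip_outputs` flag mentioned in the error message goes on the dependency block. Note that it only makes sense when the dependency's outputs are never consumed, which is not the case here; this is a sketch with an illustrative path:

```hcl
# Sketch only: skip_outputs tells terragrunt not to fetch outputs at all.
# Use it only when the dependency's outputs are not referenced anywhere.
dependency "aaa" {
  config_path  = "../aaa"   # path is illustrative
  skip_outputs = true
}
```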
If you want to use relative paths, you need to remove the `/` prefix. Otherwise, it gets interpreted as an absolute path.
No, it's not relative paths - it's three dots in the example above, not two. I just removed my long path since it's not relevant.
Ah, I see there was one with only two - I'll update the post!
Ah gotcha. Sorry for the confusion.
Can you share how you are configuring your remote state? Does accessing the remote state depend on any special processing like hooks and `extra_arguments`, or on getting plugins? If so, does it work if you set the `disable_dependency_optimization = true` flag?
See the docs on dependency optimization for more info.
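For anyone unsure where the flag goes: it is set on the `remote_state` block, as confirmed later in this thread. A minimal sketch with illustrative S3 backend values:

```hcl
remote_state {
  backend = "s3"
  # Disables the dependency-fetch shortcut so the dependency is fully
  # initialized before its outputs are read.
  disable_dependency_optimization = true
  config = {
    bucket = "my-state-bucket"        # illustrative values
    key    = "aaa/terraform.tfstate"
    region = "us-east-1"
  }
}
```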
The dependency "bbb" is actually used when I define the remote state...
This is from the included terragrunt.hcl:
```hcl
remote_state {
  backend = "s3"
  config = {
    bucket         = dependency.bbb.outputs.s3_bucket_name
    dynamodb_table = dependency.bbb.outputs.dynamodb_name
    ...
  }
}
```
Will check the dependency opt tomorrow! Thanks!
Oh hmm that should work automatically... I'll need to investigate this to see what might have caused the regression.
For now, disabling the optimization should work.
I tried adding `disable_dependency_optimization = false` to all remote_state blocks, no change!
I spent some time digging into this, but I was unable to reproduce the particular issue you are running into.
Can you take a look at my test module and see if you can identify some differences between your setup and mine? https://github.com/gruntwork-io/terragrunt/tree/yori-investigate-dependency-regression/test/fixture-get-output/nested-optimization-dependency
Note how I am using outputs from `deepdep` to feed into the remote state block for `dep`.
I fixed a test case and invited you, @yorinasub17. I can't post this code in public...
Thanks for providing a test case. I tried it out with terragrunt 0.24.0, but I was unable to reproduce the issue. However, I noticed that you were using workspaces, and I believe this may be the issue.
In particular, the dependency optimization will fail if the dependency is using a workspace for the state because it doesn't switch to that workspace.
So in your example, I was able to produce an issue if I use a different workspace for the S3 and dynamodb modules, but keep the default for the debug environment code that depends on it.
I was also able to successfully work around it by adding `disable_dependency_optimization = true` to https://github.com/mkv-ts/terragrunt-get-outputs-bug/blob/master/inits/main/terragrunt.hcl#L25 so that it used the preconfigured workspace in the cache.
As a side note, the dependency fetching will also naturally fail if you wipe the terragrunt cache, as terragrunt doesn't know to switch to that workspace when it reinitializes the cache. So if you are wiping the cache in between runs, that can also cause this issue.
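To check whether a workspace mismatch is the culprit, one way (a sketch; the cache path components vary per checkout, so the `<hash>` directories are placeholders) is to inspect the cached copy of the dependency directly:

```
# Sketch: <hash> directories under .terragrunt-cache differ per machine/run.
cd /.../aaa/.terragrunt-cache/<hash>/<hash>
terraform workspace show    # prints the currently selected workspace
terraform output -json      # an empty {} here suggests the applied workspace is not selected
```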
I was adding a docker image to the repo to make sure we were running the same thing, but I couldn't recreate the issue in that new image (it still exists in our "real" docker image).
So this issue must be caused by some other config we're doing somewhere... I will look into this further when I have time, and get back to you!
Hi, not sure what the outcome of the investigation was, but I face the same issue. The example structure I have looks like this:

```
└── xxx
    ├── aks
    │   └── terragrunt.hcl
    └── aks-addon
        └── terragrunt.hcl
```
In my "aks-addon" module's terragrunt.hcl I defined a dependency:

```hcl
dependency "aks" {
  config_path = "../aks"
}
```
But when I try to run any terragrunt command inside the aks-addon directory I get:

```
[terragrunt] [/home/adamplaczek/xxx/aks] 2020/11/26 08:16:35 Running command: terraform init -get=false -get-plugins=false
[terragrunt] [/home/adamplaczek/xxx/aks] 2020/11/26 08:16:38 Running command: terraform output -json
[terragrunt] 2020/11/26 08:16:40 /home/adamplaczek/xxx/aks/terragrunt.hcl is a dependency of /home/adamplaczek/xxx/aks-addons/terragrunt.hcl but detected no outputs. Either the target module has not been applied yet, or the module has no outputs. If this is expected, set the skip_outputs flag to true on the dependency block.
```
But when I enter the dependency directory manually and run

```
terraform init -get=false -get-plugins=false
terraform output -json
```

I can see the output.
If I set `disable_dependency_optimization = true` on the remote_state block, it works.
I'm using Azure.
Sadly I get the same problem, but for me `disable_dependency_optimization = true` doesn't fix it. Basically I can't delete the ALB because it can't get the outputs of its security group, so I'm left with a manual delete, which is odd.
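One possible workaround for the destroy case (an assumption on my part, built from the `mock_outputs_allowed_terraform_commands` flag that appears elsewhere in this thread, with illustrative names and values): allow mocks only for `destroy`, so the delete can proceed even when the security group's real outputs can't be fetched.

```hcl
dependency "sg" {                       # name and path are illustrative
  config_path = "../security-group"
  mock_outputs = {
    security_group_id = "sg-00000000"   # placeholder value
  }
  # Mocks are substituted only for these commands.
  mock_outputs_allowed_terraform_commands = ["destroy"]
}
```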
> As a side note, the dependency fetching will also naturally fail if you wipe the terragrunt cache, as terragrunt doesn't know to switch to that workspace when it reinitializes the cache. So if you are wiping the cache in between runs, that can also cause this issue.
I thought https://terragrunt.gruntwork.io/docs/features/caching/ meant we could safely get rid of these or are you talking about something else?
We triggered the issue when we migrated our remote state files (S3, DynamoDB for locking) to another location. Before that, relative dependencies were working fine, but now `plan` and `apply` are failing. Applying in the dependency folder generates outputs, but the downstream module doesn't see them as it used to.
The `disable_dependency_optimization = true` workaround unblocked us. We also tried changing relative paths to absolute, but it didn't help.
terraform = 0.12.29, terragrunt = 0.26.2
Still reproducing this with terraform 0.15.2 and terragrunt 0.29.2; `disable_dependency_optimization = true` does not help.
My modules were in the plan stage. I got rid of `mock_outputs_allowed_terraform_commands = ["validate"]` and left only `mock_outputs = { ... }`, and it started to work.
```hcl
// module
include {
  path = find_in_parent_folders("account.hcl")
}

locals {
  root_vars = read_terragrunt_config(find_in_parent_folders())
  region    = local.root_vars.locals.default_region
}

terraform {
  source = "../../../../../modules//module"
}

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<-EOF
    provider "aws" {
      region = "${local.region}"
    }
  EOF
}
```
```hcl
// deepmodule
include {
  path = find_in_parent_folders("account.hcl")
}

locals {
  root_vars = read_terragrunt_config(find_in_parent_folders())
  region    = local.root_vars.locals.default_region
}

terraform {
  source = "../../../../../modules//deepmodule"
}

dependency "vpc" {
  config_path = "../vpc"
  // mock_outputs_allowed_terraform_commands = ["validate"] <- not working
  mock_outputs = {
    internal_vpc             = { id = "MOCK" }
    internal_public_subnets  = { ids = ["subnet-MOCK"] }
    internal_private_subnets = { ids = ["subnet-MOCK"] }
  }
}

inputs = {
  vpc_id = dependency.vpc.outputs.internal_vpc.id
  network = {
    public_subnets  = tolist(dependency.vpc.outputs.internal_public_subnets.ids)
    private_subnets = tolist(dependency.vpc.outputs.internal_private_subnets.ids)
  }
}

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<-EOF
    provider "aws" {
      region = "${local.region}"
    }
  EOF
}
```
The issue occurred for us after updating from terraform 0.13 to 0.15 and terragrunt from 0.23 to 0.29.
:point_right: We could fix the issue by deleting the `.terraform` directory in the dependent project. Now it works even without `disable_dependency_optimization = true`.
This might also explain why @yorinasub17 could not reproduce the issue.
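The workaround described above amounts to removing the dependency's stale local init state so terragrunt re-initializes it on the next run. A sketch, where `../dependency-module` stands in for your actual dependency path:

```shell
# Sketch: remove the stale .terraform directory in the dependency project.
# terragrunt will re-run `terraform init` there on the next invocation.
rm -rf ../dependency-module/.terraform   # path is illustrative
```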
I had the same issue, and the following configuration caused this bug for me:

```hcl
config = {
  key                  = "${path_relative_to_include()}/terraform.tfstate"
  resource_group_name  = get_env("REMOTE_STATE_RESOURCE_GROUP", "rg-terragrunt-backend-state")
  storage_account_name = get_env("REMOTE_STATE_STORAGE_ACCOUNT", "stterragruntstate")
  container_name       = get_env("REMOTE_STATE_STORAGE_CONTAINER", "terragrunt")
}
```

This means the backend configuration is dynamic and determined by environment variables. However, I needed to run terragrunt from an automation pipeline (in our case Azure Pipelines), and then the same backend configuration is used for the modules and all the dependencies.
I think a better practice is to hardcode the configuration in `terragrunt.hcl` so it works seamlessly with different state files for different modules:

```hcl
config = {
  key                  = "${path_relative_to_include()}/terraform.tfstate"
  resource_group_name  = "rg-terragrunt-backend-state"
  storage_account_name = "stterragruntstate"
  container_name       = "deployment-stamp-eu1-dev"
}
```

Hope this helps anyone who finds themselves in the same situation.
Same issue as the original post:
TG = v0.29.10 TF = v1.0.0
@naul1 FYI terragrunt does not yet officially support tf 1.0. Please follow https://github.com/gruntwork-io/terragrunt/issues/1710 for when we provide support.
> I thought https://terragrunt.gruntwork.io/docs/features/caching/ meant we could safely get rid of these or are you talking about something else?

@khushil the reason the cache is no longer ephemeral when you are using workspaces is that the cache now contains environmental context (the enabled workspace) that is recorded on disk.
@valdestron when you say

> My modules were in the plan stage.

do you mean the modules were in a clean slate with no deployed infrastructure? In that case the dependency fetching is properly giving you an error, because there are no outputs (the dependent module hasn't been applied) when your configuration expects them. Using `mock_outputs` to support `plan` is the proper solution for that.
The problem continues in 2022 with the latest version... The `mock_outputs` solution is very poor, since `plan` tells you one thing and `apply` does another. In addition, having production code with "mocks" is very strange.
> In that case the dependency fetching is properly giving you an error because there are no outputs (because the dependent module hasn't been applied) when your configuration expects it.
This was the solution for me. I was trying to test changes to infrastructure using a custom prefix, and those resources did not exist in AWS. Running init and apply on the modules supplying the outputs for the dependent modules solved this issue.
Is there any movement on this? This is a very peculiar case, since it is a classic case of terraform planning and identifying dependencies. Mocking is a way out, but it is very misleading in plan vs apply.
Hello everyone.
I'm using the latest version (v0.38.7), and if I use `mock_outputs`, terragrunt always uses the values from the `mock_outputs` and never detects the outputs from the dependency, even if I set `skip_outputs = false`.
Has anyone solved this?
Thanks.
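One thing worth checking (a suggestion, not a confirmed fix, using the `mock_outputs_allowed_terraform_commands` flag that appears earlier in this thread with illustrative names and values): restrict which commands may use the mocks, so real outputs are read for `apply`:

```hcl
dependency "dep" {              # illustrative name/path
  config_path = "../dep"
  mock_outputs = {
    some_output = "MOCK"        # placeholder
  }
  # Mocks are substituted only for these commands.
  mock_outputs_allowed_terraform_commands = ["validate", "plan"]
}
```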
We experienced the same issue when importing resources into terragrunt.
Apparently, when running `terragrunt state pull` in the dependency path, we found that the "outputs" object in the state was empty. This can also be checked by running `terragrunt output`.
Running `terragrunt refresh` solves it!
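Putting the steps above together as a command sketch (run from the dependency's directory, assuming terragrunt is on PATH and the backend is reachable):

```
terragrunt state pull     # inspect the raw state; look for an empty "outputs": {}
terragrunt output         # confirms whether outputs are currently visible
terragrunt refresh        # repopulates the outputs in the state
terragrunt output         # outputs should now be listed
```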
In my environment too, running `terragrunt refresh` as in @shbedev's method solved this problem.
However, if the terraform project is large, for example with many dependencies, `terragrunt refresh` is very slow.
Does anyone know how to optimize the execution speed by getting only the output?
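One possible speed-up (an assumption, not tested here): rather than refreshing everything with `terragrunt run-all refresh`, refresh only the specific dependency folders whose outputs are empty. A sketch with an illustrative path:

```
# Refresh just the dependency that feeds the failing module (../vpc is illustrative).
# Much cheaper than run-all across a large dependency tree.
(cd ../vpc && terragrunt refresh)
```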
Have the same problem; tried everything but didn't get through it. For me it was a reason to not use terragrunt any more.
After `terragrunt run-all refresh` I could see the outputs (without mocking).
Getting the same error. I am trying to run `terragrunt run-all init` in https://github.com/vallard/K8sClass/tree/master/terragrunt/stacks/stage using:
terragrunt v0.43.0
Terraform v1.3.7 on darwin_arm64
Full error:

```
ERRO[0013] Module /Users/sanketh.bk/my_projects/k8s/eks/k8sclass/terragrunt/stacks/stage/eks has finished with an error: /Users/sanketh.bk/my_projects/k8s/eks/k8sclass/terragrunt/stacks/stage/vpc/terragrunt.hcl is a dependency of /Users/sanketh.bk/my_projects/k8s/eks/k8sclass/terragrunt/stacks/stage/eks/terragrunt.hcl but detected no outputs. Either the target module has not been applied yet, or the module has no outputs. If this is expected, set the skip_outputs flag to true on the dependency block.  prefix=[/Users/sanketh.bk/my_projects/k8s/eks/k8sclass/terragrunt/stacks/stage/eks]
ERRO[0013] 1 error occurred:
	* /Users/sanketh.bk/my_projects/k8s/eks/k8sclass/terragrunt/stacks/stage/vpc/terragrunt.hcl is a dependency of /Users/sanketh.bk/my_projects/k8s/eks/k8sclass/terragrunt/stacks/stage/eks/terragrunt.hcl but detected no outputs. Either the target module has not been applied yet, or the module has no outputs. If this is expected, set the skip_outputs flag to true on the dependency block.
```