Terraform test produces different output for each run
First of all, thanks a lot for this new (experimental) feature - really looking forward to getting this one stable! Hopefully this bug report will help you.
Terraform Version
Terraform v0.15.4
on linux_amd64
Expected Behavior
When I run terraform test, I expect the result to be stable as long as I don't change any inputs.
Actual Behavior
Every time I run terraform test I get different results. Sometimes the tests pass, sometimes a few of them fail.
Steps to Reproduce
Run terraform test multiple times without changing any input (files) when you've got more than one test (assertion).
Additional Context
Console output (experimental warning removed for brevity):
❯ terraform test
─── Failed: fixed_prefix.fixed_prefix_will_not_be_used_for_hash_calculation.same_string_for_different_prefix (string is the same for different prefix (except the prefix)) ────────────────────────────────────────────────────────────────────────────────────────────────────────
wrong value
got: "ath-17bb-rs"
want: "th-17bb-rs"
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
❯ terraform test
─── Failed: fixed_prefix.fixed_prefix_will_not_be_used_for_hash_calculation.same_string_for_different_prefix (string is the same for different prefix (except the prefix)) ────────────────────────────────────────────────────────────────────────────────────────────────────────
wrong value
got: "ath-17bb-rs"
want: "th-17bb-rs"
─── Failed: fixed_prefix.fixed_prefix_length_will_be_considered.max_length (length is 11 characters, because fixed prefix could be one character more than given) ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────
condition failed
─── Failed: fixed_prefix.fixed_prefix_length_will_be_considered.shortened_string_prefix (string has the correct prefix) ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
wrong value
got: "b"
want: "a"
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
❯ terraform test
Success! All of the test assertions passed.
I just ran the tests three times without changing any input or making any changes to the module or test files.
I've got two test suites, but the same issue was happening with everything together in just one test suite as well.
In these test suites I've got 5 test_assertions in total, where I intentionally configured some tests to fail. So the "condition failed" messages are correct, but they should appear in the output of every test run.
Idea: I noticed that I used the same value for component in different test_assertions. Making the names unique seems to provide stable results (with a different order of the failed conditions, but imho this should be fine). Anyway, I would expect the output to always be stable, and in case I misconfigured something the test suite should fail with a reasonable error message (like "test_assertions x and y have the same component value").
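To illustrate what I mean, here's a minimal sketch of the problematic pattern (the resource, component, and check names are made up, not my actual configuration):

resource "test_assertions" "first" {
  # Both resources use the same component value...
  component = "my_component"

  check "first_check" {
    description = "first assertion"
    condition   = true
  }
}

resource "test_assertions" "second" {
  # ...which seems to be what triggers the unstable output.
  # Renaming this (e.g. to "my_component_2") gave me stable results.
  component = "my_component"

  check "second_check" {
    description = "second assertion"
    condition   = true
  }
}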
References
Haven't found any.
Hi @chris922, thanks for reporting this. I'm able to reproduce it with the following configuration.
main.tf:
output "foo" {
value = "bar"
}
tests/foo/test_foo.tf:
terraform {
  required_providers {
    test = {
      source = "terraform.io/builtin/test"
    }
  }
}

module "main" {
  source = "../.."
}

resource "test_assertions" "length" {
  component = "foo"

  equal "length" {
    description = "length is 5"
    got         = length(module.main.foo)
    want        = 5
  }
}

resource "test_assertions" "vowels" {
  component = "foo"

  check "vowels" {
    description = "has no vowels"
    condition   = can(regex("^[^AEIOUaeiou]+$", module.main.foo))
  }
}
These should both consistently fail, but I'm seeing missing descriptions, random reordering, and occasionally dropped failures. Sample output (warnings elided):
$ terraform test
─── Failed: foo.foo.length (length is 5) ──────────────────────────────────────
wrong value
got: 3
want: 5
───────────────────────────────────────────────────────────────────────────────
$ terraform test
─── Failed: foo.foo.length (length is 5) ──────────────────────────────────────
wrong value
got: 3
want: 5
─── Failed: foo.foo.vowels () ─────────────────────────────────────────────────
condition failed
───────────────────────────────────────────────────────────────────────────────
$ terraform test
─── Failed: foo.foo.length () ─────────────────────────────────────────────────
wrong value
got: 3
want: 5
─── Failed: foo.foo.vowels (has no vowels) ────────────────────────────────────
condition failed
───────────────────────────────────────────────────────────────────────────────
Hi @chris922! Thanks for reporting this.
Unfortunately this behavior is a consequence of the temporary solution of using a special built-in Terraform provider as a substitute for first-class language features for declaring test assertions. The special "test" provider uses the component string to correlate assertions that exist during the planning phase with those same assertions at the apply phase, and so I expect that making them non-unique causes Terraform to mix them up and compare the final results with the wrong instances of the planned assertions.
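For example, in the reproduction configuration above both test_assertions resources use component = "foo". A sketch of the workaround would be to give each resource its own component value (the specific names here are just illustrative):

resource "test_assertions" "length" {
  # A component value unique to this resource, so the plan/apply
  # correlation cannot collide with the "vowels" assertions.
  component = "foo_length"

  equal "length" {
    description = "length is 5"
    got         = length(module.main.foo)
    want        = 5
  }
}

resource "test_assertions" "vowels" {
  # Likewise unique for the second assertion resource.
  component = "foo_vowels"

  check "vowels" {
    description = "has no vowels"
    condition   = can(regex("^[^AEIOUaeiou]+$", module.main.foo))
  }
}

With unique component values, each planned assertion correlates with exactly one applied assertion, so the failures should be reported consistently on every run.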
There's actually a note about this very thing in the implementation today:
https://github.com/hashicorp/terraform/blob/2ecdf44918a6cbb4668b12c382cff39c649041bc/internal/moduletest/provider.go#L270-L288
This reflects that I initially tried to emit an error message for this case but found that it wasn't possible due to the design of Terraform's provider lifecycle. A Terraform provider isn't really intended for this sort of state-keeping between plan and apply, so I had to make some compromises as a result. One of those compromises was requiring a unique key to identify each assertion.
I expect that the solution to this issue will come in the form of replacing the temporary workaround of using a provider with some real language syntax that Terraform Core is actually aware of and can treat in a special way. I don't think it's worth trying to redesign the temporary workaround to behave differently, since the purpose of the current experiment was to learn what sorts of tests the current approach allows and doesn't allow, rather than to evaluate the usability of the temporary syntax that we ultimately intend to replace anyway.
With that said, I do appreciate you reporting this but I also don't expect we'll take any direct action to address this bug. Instead, you'll need to make sure that your test cases all have unique component names for now, and then in the next iteration of the experiment we'll switch to a design that can either enforce uniqueness at the language decoder level or can switch to a different design that doesn't require uniqueness at all, depending on the outcome of the next design phase. Thanks!
Hi @apparentlymart, thank you very much for your detailed response. I totally agree that it doesn't make sense to fix this if you plan to move away from the "test provider" solution that is in place right now.
Maybe the documentation here can be updated to clearly state that not using a unique component name can lead to such errors? I had a look at this page when I was searching for what I did wrong, and it took some time until I realized I had used the same component value twice.
I'm closing this issue, as we have now released a redesigned terraform test command. Please try it out, and file a new issue if this behaviour has persisted between the alpha and released versions. Thanks!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.