Terraform validate not applied with check goal
Describe the bug
When running the `check` goal on Terraform modules, the command exits successfully (exit code 0) but does not perform any actual validation. Additionally, no logs are shown, even at debug log level. Version v2.17.0 works as expected.

Pants version
v2.18.0

OS
MacOS

Additional info
This test repository can be used to reproduce the problem. The `fmt` and `lint` goals work as expected; `check`, on the other hand, does not.
@lilatomic this appears to be a regression from 2.17 to 2.18, which suggests it might be related to https://github.com/pantsbuild/pants/pull/18974 . Do you happen to have insight into what's going on here?
Ah, right, this is my bad for not writing up a doc on Terraform.

TL;DR: you need to add a `terraform_deployment` target, because `terraform validate` can only run safely against "root" modules (modules designed to be deployed):

```
terraform_deployment(name="deployment", root_module="terraform/modules/test:test")
```
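For a concrete picture, a BUILD file combining the module and its deployment might look like the following (a sketch only; the paths and target names are illustrative, based on the snippet above):

```
# terraform/modules/test/BUILD

# The module itself; `fmt` and `lint` run against this target.
terraform_module(name="test")

# Marks the module as a "root" module, so that the `check` goal
# can run `terraform validate` with providers available.
terraform_deployment(
    name="deployment",
    root_module=":test",
)
```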
The change is https://github.com/pantsbuild/pants/pull/19185 , comment https://github.com/pantsbuild/pants/pull/19185#issuecomment-1566534258 .

The challenge is something upstream in Terraform itself: https://github.com/hashicorp/terraform/issues/28490. It is possible to lint any Terraform file, so we can lint all `terraform_module`s. But `terraform validate` cannot be run on just any collection of Terraform files (not even complete modules), because it requires providers to be present. And providers are only guaranteed to be present on a "root" module, leading to the situation in that issue.
I know this isn't ideal and is definitely unexpected. I thought this was the best position to take, since it won't cause errors stemming from Terraform's behaviour. I've also found this to be a reasonable position in my own work, where all modules are consumed by deployments and `check`ing modules was causing errors. I'm open to suggestions on what would be better! Would it be better to attempt to check all modules and offer an opt-out toggle? Would a doc explaining this behaviour be sufficient?
It's entirely on me that the doc on the Terraform backend and porting instructions isn't written. I'll have time tomorrow to write it up.
Hi @lilatomic, thanks for the response, and thanks for putting some (needed) work into the Terraform backend, even without the docs :)
On the overall approach, doesn't marking a module as a deployment imply the risk of actually deploying it with `experimental-deploy`? We'd need a `skip_deploy` flag to mitigate this. Alternatively, I'd go with a `skip_check` flag on the `terraform_module` itself and let users decide whether a module can be validated or not.
As a side note, adding the line you suggested actually caused an error:

```
09:39:37.89 [ERROR] 1 Exception encountered:
Engine traceback:
in `check` goal
OptionsError: You must explicitly specify the default Python interpreter versions your code is intended to run against.
You specify these interpreter constraints using the `interpreter_constraints` option in the `[python]` section of pants.toml.
We recommend constraining to a single interpreter minor version if you can, e.g., `interpreter_constraints = ['==3.11.*']`, or at least a small number of interpreter minor versions, e.g., `interpreter_constraints = ['>=3.10,<3.12']`.
Individual targets can override these default interpreter constraints, if different parts of your codebase run against different python interpreter versions in a single repo.
See https://www.pantsbuild.org/v2.18/docs/python-interpreter-compatibility for details.
```
I'm not using a Python backend in this example, so this is rather confusing IMHO. Adding the config solves the issue; perhaps worth mentioning in the docs?
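For reference, the config that resolved it was along these lines in pants.toml (the exact constraint value is illustrative; any version range supported by your environment should work):

```toml
[python]
# Required even without a Python backend in use in this repo.
interpreter_constraints = ["==3.11.*"]
```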
I think the interpreter constraints are an internal thing Pants needs in order to run the dependency parser. We can definitely call that out in the docs.
Yeah, you're right that adding a deployment would make it liable to being deployed, and `terraform_deployment(..., skip_deploy=True)` seems a little silly to me. Adding a `skip_terraform_validate` field is on the todo list anyhow and is simple enough to add; it gives maximum control to the end user.
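Assuming the field lands with that name, opting a single module out of validation would presumably look something like this (the field name is taken from the discussion above; treat it as tentative until the change merges):

```
terraform_module(
    name="test",
    # Tentative field discussed above: skip `terraform validate`
    # (the `check` goal) for this module only.
    skip_terraform_validate=True,
)
```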
Can you also talk a bit about how you'd like to use the Terraform backend? At work I've got modules and deployments in the same repo, and use multiple aliased providers. For me, `validate`-ing modules was a bug, and `validate`-ing a deployment is the correct solution from that thread. Do you have modules that you publish to a registry?
Sort of. We have one repo that holds the Terraform modules and another that generates tfvars for these modules; we basically generate instances of the "root" modules dynamically. So development and deployment of said modules happen in separate repositories, and we won't use Pants for deployment, at least for the foreseeable future.
Thanks for explaining, I'll keep that in mind as I'm developing. I've got an MR up that `validate`s modules again, adds the `skip_terraform_validate` flag, and also adds some documentation. I'd appreciate it if you could review it.
As far as my limited knowledge of Pants internals goes, it looks good to me! Thanks for the quick fix.
Btw, it is often useful to control the order in which root modules are applied. I think it should be possible to use Pants to our advantage for this (e.g. with `terraform_deployment` dependencies?).
Sorry, I missed the last query there. I don't think Pants currently has a way to set the deployment order of deployable targets. I think using the `dependencies` field here doesn't work, because the dependency is placed on the files of the `terraform_deployment`, not on the result of the deployment.
It is possible to implement that, though, just needs to get done.
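To illustrate what ordering deployments could mean conceptually (purely a sketch, not Pants code; the target names and dependency graph are invented for the example), a topological sort over a deployment dependency graph yields a valid apply order:

```python
from graphlib import TopologicalSorter

# Hypothetical graph: each deployment maps to the deployments it depends on.
deps = {
    "network:deployment": set(),
    "database:deployment": {"network:deployment"},
    "app:deployment": {"network:deployment", "database:deployment"},
}

# static_order() emits every node after all of its predecessors,
# i.e. an order in which the deployments could safely be applied.
apply_order = list(TopologicalSorter(deps).static_order())
print(apply_order)
# → ['network:deployment', 'database:deployment', 'app:deployment']
```

This is only a conceptual sketch of the ordering problem; hooking such an order into the actual deploy goal is the part that "just needs to get done".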
Hey, we've solved this issue in mainline now, so I'm going to close this.