metadata attributes do not appear to be working?
Description: I'm trying to detect whether a GCS bucket is created by a module. I have this very simple use case:
Feature: Ensure GCS bucket is created by module
Scenario: Reject any bucket not created via module
Given I have google_storage_bucket defined
When its address metadata does not contain "module"
Then it fails
I get the following output:
🚩 Running tests. 🎉
Feature: Ensure GCS bucket is created by module # /Users/dan/Documents/Projects/tmp/terraform-compliance-testing/bucket-module-source.feature
Scenario: Reject any bucket not created via module
Given I have google_storage_bucket defined
When its address metadata does not contain "module"
Failure: Forcefully failing the scenario on google_storage_bucket (google_storage_bucket.lh-testbucket, module.gcs_bucket_lh_development1_cloudsql.google_storage_bucket.this, module.gcs_bucket_loveholidays_helm_charts_dev.google_storage_bucket.this) resource
Then it fails
Failure:
1 features (0 passed, 1 failed)
1 scenarios (0 passed, 1 failed)
3 steps (2 passed, 1 failed)
Using the debug steps, I can see the three buckets (two from a module, one not) in the stash, each with an address field (I've also tried isolating on the ones that contain module_address, with no luck). This check should fail only the one resource that isn't from a module.
Am I doing something wrong, or is it broken? I've not managed to use any attributes outside of values.
Hey @dwilliams782
It looks like you are trying to enforce module use, similar to how I am. Take a look at my issue and feature request here: #456. @eerkunt, this is another use case for my feature request. Just an FYI.
Unfortunately, metadata filtering doesn't allow us to test for the presence or absence of a key/value pair within terraform-compliance. If you dig into the plan JSON you will see that module_address only exists within resources that are created by a module.
Also, metadata is not yet regex-searchable, so the When statement you have:
When its address metadata does not contain "module"
won't work as written. We can't yet use regex in When statements at all; this failure is a symptom of that same limitation, regardless of what the When is filtering. You need a Then to get some regex-y goodness!
Here is what I do for AWS, leveraging the BDD reference below. It checks the entire resource to see whether module. appears anywhere:
Given I have aws_s3_bucket defined
Then it must have "module." referenced
Since resources that are not built using a module will not have module. anywhere, this lets you qualify or disqualify whether they were built using a module. Let me know if this helps!
Hi @mdesmarest, thanks for responding.
Yes, your feature request is exactly the sort of thing we are trying to workaround here.
Understood about When not supporting regex.
Using your example:
Feature: Ensure GCS bucket is created by module
Scenario: Reject any bucket not created via module
Given I have google_storage_bucket defined
Then it must have "module." referenced
It still fails incorrectly (only one resource should fail):
Feature: Ensure GCS bucket is created by module # /Users/dan/Documents/Projects/tmp/terraform-compliance-testing/bucket-module-source.feature
Scenario: Reject any bucket not created via module
Given I have google_storage_bucket defined
Failure: module. is not referenced within google_storage_bucket.testbucket.
Failure: module. is not referenced within module.gcs_bucket__cloudsql.google_storage_bucket.this.
Failure: module. is not referenced within module.gcs_bucket_helm_charts_dev.google_storage_bucket.this.
Then it must have "module." referenced
Failure:
1 features (0 passed, 1 failed)
1 scenarios (0 passed, 1 failed)
2 steps (1 passed, 1 failed)
Run 1620223739 finished within
I wonder if the plan is different between AWS and GCP providers?
@dwilliams782, can you show me your stash (redact or replace info with junk), and/or take a look at the JSON? I'm not a Google Cloud guy, but HCL is HCL as far as Terraform is concerned. Most times I work through the stash; sometimes the logic here can get counterintuitive, as some of the features I have written represent pivots.
Essentially, since you are not able to enforce against a particular module yet, what I do is set up a feature that tests for what you would expect from a resource created using the module you desire, and then flag outliers: missing parameters, etc. I set up a single feature with several different scenarios within it (see the full example further down), so that the only way to pass the feature is to pass all the checks for each of the arguments the module creates. Hope that makes sense.
It's less testing for module use and more "here is a correctly configured resource; make sure all checks pass for it", thus failing on antipatterns.
Given I have google_storage_bucket defined
>> Press enter to continue
[
{
"address": "google_storage_bucket.lh-testbucket",
"mode": "managed",
"type": "google_storage_bucket",
"name": "lh-testbucket",
"provider_name": "registry.terraform.io/hashicorp/google",
"values": {
"cors": [],
"default_event_based_hold": null,
"encryption": [],
"force_destroy": false,
"labels": null,
"lifecycle_rule": [],
"location": "EUROPE-WEST2",
"logging": [],
"name": "dan-testing-tfc",
"project": "<removed>",
"requester_pays": null,
"retention_policy": [],
"storage_class": "STANDARD",
"versioning": [],
"website": []
},
"actions": [
"create"
]
},
{
"address": "module.gcs_bucket_<removed>_cloudsql.google_storage_bucket.this",
"module_address": "module.gcs_bucket_<removed>_cloudsql",
"mode": "managed",
"type": "google_storage_bucket",
"name": "this",
"provider_name": "registry.terraform.io/hashicorp/google",
"values": {
"bucket_policy_only": false,
"cors": [],
"default_event_based_hold": false,
"encryption": [],
"force_destroy": false,
"id": "<removed>",
"labels": {
"environment": "development",
"module_source": "terraform-module-gcs",
"name": "<removed>",
"purpose": "cloudsql_syncronisation",
"team": "devops",
"terraform_managed": true
},
"lifecycle_rule": [],
"location": "EUROPE-WEST2",
"logging": [],
"name": "<removed>",
"project": "<removed>",
"requester_pays": false,
"retention_policy": [],
"self_link": "https://www.googleapis.com/storage/v1/b/<removed>",
"storage_class": "REGIONAL",
"uniform_bucket_level_access": false,
"url": "<removed>",
"versioning": [
{
"enabled": false
}
],
"website": []
},
"actions": [
"no-op"
]
},
{
"address": "module.<removed>.google_storage_bucket.this",
"module_address": "module.<removed>",
"mode": "managed",
"type": "google_storage_bucket",
"name": "this",
"provider_name": "registry.terraform.io/hashicorp/google",
"values": {
"bucket_policy_only": true,
"cors": [],
"default_event_based_hold": false,
"encryption": [],
"force_destroy": false,
"id": "<removed>",
"labels": {
"environment": "development",
"module_source": "terraform-module-gcs",
"name": "<removed>",
"purpose": "helm-charts-in-dev",
"team": "devops",
"terraform_managed": true
},
"lifecycle_rule": [],
"location": "EUROPE-WEST2",
"logging": [],
"name": "<removed>",
"project": "<removed>",
"requester_pays": false,
"retention_policy": [],
"self_link": "https://www.googleapis.com/storage/v1/b/<removed>",
"storage_class": "REGIONAL",
"uniform_bucket_level_access": true,
"url": "gs://<removed>",
"versioning": [
{
"enabled": false
}
],
"website": []
},
"actions": [
"no-op"
]
}
]
What you're saying makes total sense, but any attempt at referencing any of the metadata values hasn't worked for me. I can reference anything inside values just fine.
Not sure why you are having an issue; it should be the same for both of us. What version of terraform-compliance are you running?
Also, can you reference the actions metadata at all? I can't really tell what is occurring, because I can't see your debug output and kick it into IPython to see what your in-step stuff is populating.
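As a quick probe (a sketch only; the feature and scenario names are illustrative, and the actions step is the same one used in the feature further down), something like this would show whether metadata filtering works for you at all:

Feature: Probe metadata filtering
Scenario: Check whether actions metadata is readable
Given I have google_storage_bucket defined
When its actions metadata has create
Then it fails

Going by your stash, only google_storage_bucket.lh-testbucket has a create action (the module-built buckets are no-op), so this should forcefully fail exactly one resource; if it fails all three, or none, the metadata filtering itself is what's misbehaving.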
Ultimately, the fact that it references module is only one piece of your verification process here, since all that confirms is that the resource was created with a module, not necessarily THE module you wish. Your best bet is to do as I have here for my AWS S3 buckets.
It's a game of abductive reasoning, or abDUCKtive reasoning as I like to call it. If I see a bird, I can't 100% confirm it's a duck, but if it walks like a duck, swims like a duck, and quacks like a duck, I can reasonably feel good about it being a duck. It's a matter of what is of higher concern: do you want to use the module to ensure all arguments are configured correctly, or do you really want to ensure this specific module is used? If it's the former, the feature style I wrote above works; if it's the latter, then we need to hope to get deeper metadata exposure.
The style in which you write your features depends on what you hope to accomplish. I cluster scenarios whose failures all indicate the same thing under one feature with a set of specific results; you can break these up into separate features as well.
Hope this helps. Def have the big brains here take a peek.
Feature: s3 buckets must be created using the FUBARCORPMODEL module. Details: https://github.com/FUBARCORP/tfcompliance/all/s3/s3module.feature
Fixes: Resources may fail one or several checks. Any failures indicate disuse of the FUBARCORP s3 module, or the use of an unapproved s3 module
Module Alert: For all new s3 buckets, please use: https://github.com/FUBARCORP/tf-module-s3-bucket
# These checks are designed to pattern after what our s3 module produces. Failures on any or multiple scenario(s) indicate usage of a different module or
# use of vanilla s3 resource blocks. Compliance to this module is necessary to tie resources to future iterations of the module as they relate to
# our security standards. The 2u module is set to encrypt, replicate, version, and create a backup bucket.
# Verifies an s3 bucket is being newly created
Background: Check for the presence of a new aws_s3_bucket
Given I have aws_s3_bucket defined
When its actions metadata has create
# Verifies an s3 bucket is being created with a module
Scenario: Ensure a module is used on all new s3 buckets
Then it must have "module." referenced
# Verifies the presence of server_side_encryption
Scenario: Ensure server_side_encryption enabled on all new s3 buckets
Then it must have server_side_encryption_configuration
# Verifies encryption at rest is set to AES256
Scenario: Ensure that all new s3 buckets contain sse_algorithm and it must be set to AES256
When it has server_side_encryption_configuration
Then it must have sse_algorithm
And its value must be AES256
Scenario: Verify replication_configuration is set up on the primary bucket and using the correct role
When it does not contain aws_s3_bucket
Then it must have replication_configuration
Then its role must be arn:aws:iam::9999999999:role/S3-Cross-Account-Backup
Scenario: Verify a backup bucket is created and mapped to the primary bucket using the correct role
When it does not contain replication_configuration
Then it must have aws_s3_bucket
Then it must have replication_configuration
Then its role must be arn:aws:iam::999999:role/S3-Cross-Account-Backup
Scenario: Verify versioning is set up on the primary bucket and set to true
When it does not contain aws_s3_bucket
Then it must have versioning
Then its enabled must be true
Scenario: Verify versioning is set up on the backup bucket and set to true
When it does not contain replication_configuration
Then it must have aws_s3_bucket
Then it must have versioning
Then its enabled must be true
Aside from the benefits of using a SPECIFIC module for versioning purposes, there aren't many ways to VERIFY the module you wish to use until, hopefully, @eerkunt can incorporate the module_calls metadata that lists the direct path to the module I am using.
The best bet is to take an ideal, correctly configured resource and create either a series of features, or several scenarios under a top-level feature, whose failures guide the user to the appropriate module.
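For the GCS case in this thread, a minimal adaptation of that idea might look like the following (a sketch only: the label names and values are taken from the stash above, and you would extend it with scenarios for whatever else your module guarantees):

Feature: GCS buckets must be created using the terraform-module-gcs module
Background: Check for the presence of a new google_storage_bucket
Given I have google_storage_bucket defined
When its actions metadata has create
Scenario: Ensure new buckets carry the module's label fingerprint
Then it must have labels
Then it must have module_source
And its value must be terraform-module-gcs
Scenario: Ensure new buckets are labelled as terraform managed
Then it must have labels
Then it must have terraform_managed
And its value must be true

Under the plan above, the Background narrows the check to the one newly created bucket, which is missing those labels and fails both scenarios, pointing its author at the module.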