CFNTest doesn't delete the taskcat auto bucket it uploads the project files and templates to
Describe the bug
When I run `taskcat test run`, it successfully deletes the auto bucket it creates for the project files and templates.
When I run pytest using CFNTest, the test passes and all of the stacks it creates are deleted, but for some reason the auto bucket does not get deleted. I'm not sure why; the clean_up method looks like it has code to delete the buckets (see https://github.com/aws-quickstart/taskcat/blob/main/taskcat/testing/_cfn_test.py#L158), but the bucket with all the project files and templates is still there.
To Reproduce
Steps to reproduce the behavior:

1. `aws cloudformation package cloudformation/tests/firehose.yaml > cloudformation/tests/firehose-packaged.yaml`
2. `pytest -k test_firehose`
3. Observe that everything works and is deleted except the auto bucket created to upload the project files and templates to.
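To double-check which bucket is left behind after a run, a small boto3 helper can filter the account's buckets by taskcat's auto-bucket naming. The `tcat-` prefix below is an assumption about taskcat's naming scheme; adjust it to whatever you actually see in your account:

```python
def find_auto_buckets(bucket_names, prefix="tcat-"):
    """Return the names that look like taskcat auto buckets.

    The "tcat-" prefix is an assumption about taskcat's bucket
    naming; adjust it to match what shows up in your account.
    """
    return [name for name in bucket_names if name.startswith(prefix)]


# Usage against a real account (requires boto3 and credentials):
#   import boto3
#   names = [b["Name"] for b in boto3.client("s3").list_buckets()["Buckets"]]
#   print(find_auto_buckets(names))
```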
- Are you testing a QuickStart or Custom template? Not sure; I am guessing QuickStart?
- Attach or link a copy of the template if possible (remove any sensitive info):
`.taskcat.yaml`

```yaml
project:
  name: cf-rest-event-box-test
  regions:
    - us-east-1
  build_submodules: false
  package_lambda: false
tests:
  firehose:
    parameters:
      FireHosePolicyName: $[taskcat_autobucket]
    template: ./cloudformation/tests/firehose-packaged.yaml
```
`./cloudformation/tests/firehose-packaged.yaml`

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  FireHosePolicyName:
    Type: String
    Description: The Name to give to the firehose policy
    AllowedPattern: ^[\w+=,.@-]+$
    MinLength: 1
    MaxLength: 128
Resources:
  KinesisStream:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.us-east-1.amazonaws.com/${MYBUCKET}/tests-firehose/0866c4ffaa8d213bd354493af7439176.template
  S3Bucket:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.us-east-1.amazonaws.com/${MYBUCKET}/tests-firehose/7ad3f4ac80a274c0a8537cf1dfd3b236.template
  FirehoseRole:
    Type: AWS::CloudFormation::Stack
    Properties:
      Parameters:
        KinesisStreamModule:
          Fn::GetAtt:
            - KinesisStream
            - Outputs.StackName
        S3BucketModule:
          Fn::GetAtt:
            - S3Bucket
            - Outputs.StackName
        FireHosePolicyName:
          Ref: FireHosePolicyName
      TemplateURL: https://s3.us-east-1.amazonaws.com/${MYBUCKET}/tests-firehose/3d5f42ebf4cc6402254384293d58f0fe.template
  Firehose:
    Type: AWS::CloudFormation::Stack
    Properties:
      Parameters:
        KinesisStreamModule:
          Fn::GetAtt:
            - KinesisStream
            - Outputs.StackName
        S3BucketModule:
          Fn::GetAtt:
            - S3Bucket
            - Outputs.StackName
        StreamType: private
        FirehoseRoleArn:
          Fn::GetAtt:
            - FirehoseRole
            - Outputs.Arn
      TemplateURL: https://s3.us-east-1.amazonaws.com/${MYBUCKET}/tests-firehose/b588143b451d8b1b9482732b295ca2bd.template
Outputs:
  StreamName:
    Value:
      Fn::GetAtt:
        - KinesisStream
        - Outputs.Name
    Export:
      Name:
        Fn::Sub: ${AWS::StackName}-StreamName
  BucketName:
    Value:
      Fn::GetAtt:
        - S3Bucket
        - Outputs.Name
    Export:
      Name:
        Fn::Sub: ${AWS::StackName}-BucketName
```
`test_firehose.py`

```python
from taskcat.testing import CFNTest
from .utils import get_config, find_output
import boto3
import time
import json
import uuid
import pytest


def test_firehose():
    test = CFNTest(config=get_config(), regions="us-east-1", test_names="firehose")
    with test as stacks:
        bucket_name = find_output("BucketName", stacks)
        s3 = boto3.resource("s3")
        bucket = s3.Bucket(bucket_name)
        assert not list(bucket.objects.all())
        record = {"foo": "bar", "bar": "foo"}
        stream_name = find_output("StreamName", stacks)
        kinesis = boto3.client("kinesis")
        kinesis.put_record(
            StreamName=stream_name,
            Data=json.dumps(record).encode("utf-8"),
            PartitionKey=str(uuid.uuid4()),
        )
        time.sleep(450)
        assert list(bucket.objects.all())
        bucket.objects.all().delete()
```
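As a side note on the test itself, the fixed `time.sleep(450)` could be swapped for a polling loop so the test finishes as soon as Firehose has flushed to S3. This is only a sketch; `wait_for_objects` and its timings are my own, not part of taskcat:

```python
import time


def wait_for_objects(bucket, timeout=600, interval=15):
    """Poll an S3 bucket until at least one object appears.

    `bucket` is anything with an `objects.all()` iterable (e.g. a
    boto3 Bucket resource). Returns the list of objects found, or
    raises TimeoutError if the bucket stays empty past `timeout`.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        objects = list(bucket.objects.all())
        if objects:
            return objects
        time.sleep(interval)
    raise TimeoutError(f"no objects appeared within {timeout}s")
```

In the test above, `assert wait_for_objects(bucket)` would replace both the sleep and the second assertion.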
`utils.py`

```python
from taskcat import Config
from pathlib import Path
import boto3


def get_config():
    project_root_path = Path(__file__).parent / "../.."
    input_file_path = project_root_path / ".taskcat.yml"
    config = Config.create(
        project_root=project_root_path,
        project_config_path=input_file_path,
    )
    return config


def find_output(output_key, stacks):
    for stack in stacks:
        for output in stack.outputs:
            if output.key == output_key:
                return output.value
```
- Provide the parameters that you passed. (remove any sensitive info)
- How did you install taskcat? (docker or pip3) pip3
- Are you using a profile, an instance role or access keys to run taskcat? profile
- Is your AWS environment configured via `aws configure`? Yes, `~/.aws/config`
Expected behavior
The auto bucket created to upload the project files and templates to should be deleted.
Version (Please make sure you are running the latest version of taskcat)
- taskcat version: 0.9.23 (`taskcat -V` and `pip3 show taskcat` both report 0.9.23; note that the two versions should match)
- Python version (python3 required): 3.7.7
- To update taskcat: `docker pull taskcat/taskcat` (docker) or `pip3 install --upgrade taskcat` (pip3)
Additional context
Yeah, I can confirm this for taskcat v0.9.23. I was debugging the code, and it's crazy: the CFNTest.clean_up() method invokes the Config.get_buckets() method, which creates a new project bucket! That new bucket is the one that then gets deleted.
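To see why that behavior goes unnoticed, here is a minimal, hypothetical sketch of the same getter-that-creates pattern (my own toy code, not taskcat's actual implementation): a clean-up routine that looks buckets up through a method that lazily provisions them will always "find" a fresh bucket to delete, while the original survives.

```python
class FakeS3:
    """Stand-in for S3: tracks which bucket names currently exist."""

    def __init__(self):
        self.buckets = set()

    def create(self, name):
        self.buckets.add(name)
        return name


class Project:
    """Toy sketch of the bug: get_bucket() provisions instead of looking up."""

    def __init__(self, s3):
        self.s3 = s3
        self._calls = 0

    def get_bucket(self):
        # Bug in miniature: every call creates a brand-new bucket
        # instead of returning the one already used for uploads.
        self._calls += 1
        return self.s3.create(f"tcat-auto-{self._calls}")

    def run(self):
        # tcat-auto-1 is where the project files land.
        self.upload_bucket = self.get_bucket()

    def clean_up(self):
        # Deletes whatever get_bucket() hands back -- a fresh
        # tcat-auto-2, not the bucket holding the project files.
        self.s3.buckets.discard(self.get_bucket())
```

After `run()` followed by `clean_up()`, `tcat-auto-1` (with all the uploads) is still in the account, even though clean-up "successfully" deleted a bucket.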
Law of unintended consequences. I'll look at this next week.
Just in case you cannot wait for @andrew-glenn's fix, I slightly modified the file _config.py to "fix it up" (be careful, I have only tested it in a few cases).
```diff
diff --git a/taskcat/_config.py b/taskcat/_config.py
index bdcb0e5..93a4c50 100644
--- a/taskcat/_config.py
+++ b/taskcat/_config.py
@@ -282,8 +282,7 @@ class Config:
             name = bucket_objects[region.account_id].name
             auto_generated = bucket_objects[region.account_id].auto_generated
         else:
-            name = test.s3_bucket
-            auto_generated = False
+            return test.s3_bucket
         bucket_region = self._get_bucket_region_for_partition(region.partition)
         bucket_obj = S3BucketObj(
             name=name,
@@ -299,6 +298,7 @@ class Config:
         )
         if new:
             bucket_obj.create()
+        test.s3_bucket = bucket_obj
         return bucket_obj

     def _create_regional_bucket_obj(self, bucket_objects, region, test):
@@ -341,6 +341,7 @@ class Config:
         )
         if new:
             bucket_obj.create()
+        test.s3_bucket = bucket_obj
         return bucket_obj

     @staticmethod
```
The previous changeset will fail if you set the config parameter s3_bucket to reuse an existing project bucket; in that case, test.s3_bucket would have to be converted to an S3BucketObj object. The next changeset also handles that case.
```diff
diff --git a/taskcat/_config.py b/taskcat/_config.py
index bdcb0e5..5f48643 100644
--- a/taskcat/_config.py
+++ b/taskcat/_config.py
@@ -281,9 +281,11 @@ class Config:
         elif bucket_objects.get(region.account_id):
             name = bucket_objects[region.account_id].name
             auto_generated = bucket_objects[region.account_id].auto_generated
-        else:
+        elif isinstance(test.s3_bucket, str):
             name = test.s3_bucket
             auto_generated = False
+        else:
+            return test.s3_bucket
         bucket_region = self._get_bucket_region_for_partition(region.partition)
         bucket_obj = S3BucketObj(
             name=name,
@@ -299,6 +301,7 @@ class Config:
         )
         if new:
             bucket_obj.create()
+        test.s3_bucket = bucket_obj
         return bucket_obj

     def _create_regional_bucket_obj(self, bucket_objects, region, test):
@@ -341,6 +344,7 @@ class Config:
         )
         if new:
             bucket_obj.create()
+        test.s3_bucket = bucket_obj
         return bucket_obj

     @staticmethod
```
Cheers!
Thanks for looking at this. Currently I use a pytest session fixture to empty and delete the bucket after I am done, so there is an easy enough workaround for now.
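In case the workaround is useful to anyone else, here is a sketch of what such a session-scoped fixture might look like. `empty_and_delete_bucket`, the fixture name, and its wiring are my own, not taskcat's; the versioned-bucket caveat is a general boto3 note:

```python
import pytest


def empty_and_delete_bucket(bucket):
    """Empty a boto3 Bucket resource, then delete the bucket itself.

    For versioned buckets you would empty with
    bucket.object_versions.all().delete() instead.
    """
    bucket.objects.all().delete()
    bucket.delete()


@pytest.fixture(scope="session")
def auto_bucket_cleanup():
    # Tests append leftover bucket names here; everything collected
    # gets emptied and deleted once the whole session is done.
    leftover = []
    yield leftover
    import boto3  # imported lazily so collection works without credentials

    s3 = boto3.resource("s3")
    for name in leftover:
        empty_and_delete_bucket(s3.Bucket(name))
```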
@tjtaill The CLI `taskcat test run` uses CFNTest under the hood. I looked at how you create your config in get_config() and compared it to how CFNTest does it. The difference is that you are not passing the build args:
```python
args = _build_args(enable_sig_v2, regions, GLOBAL_ARGS.profile)

config = Config.create(
    project_root=project_root_path,
    project_config_path=input_file_path,
    args=args
    # TODO: detect if input file is taskcat config or CloudFormation template
)
```
I'm not sure what the build args do, or why leaving them out means the auto bucket is not deleted, but it might have something to do with setting default values.
Another way to do what you are doing is:

```python
import pytest
from pathlib import Path
from taskcat.testing import CFNTest
from .utils import find_output


@pytest.fixture(scope="session")
def template_root():
    return Path(__file__).parent / "../.."


def test_firehose(template_root):
    input_file_path = template_root / ".taskcat.yml"
    test = CFNTest.from_file(
        project_root=template_root, input_file=input_file_path, regions="us-east-1"
    )
    test.test_names = "firehose"
    with test as stacks:
        bucket_name = find_output("BucketName", stacks)
        ...
```
I'm guessing from_file wasn't documented well enough. It used to be covered at https://aws-quickstart.github.io/taskcat/apidocs/taskcat/ but it seems like it has been removed.
If you are not planning on using the taskcat CLI then you will probably find https://github.com/DontShaveTheYak/cloud-radar a better fit.
```python
from pathlib import Path
from cloud_radar.cf.e2e import Stack

# Stack is a context manager that makes sure your stacks are deleted after testing.
template_path = Path("tests/templates/log_bucket/log_bucket.yaml")
params = {"BucketPrefix": "testing", "KeepBucket": "False"}
regions = ["us-west-2"]

# template_path can be a string or a Path object.
# params can be optional if all your template params have default values.
# regions can be optional; the default region is 'us-east-1'.
with Stack(template_path, params, regions) as stacks:
    # Stacks will be created and returned as a list in the stacks variable.
    for stack in stacks:
        # stack will be an instance of taskcat's Stack class.
        # It has all the expected properties like parameters, outputs and resources.
        print(f"Testing {stack.name}")
        ...
```
This is where the original CFNTest came from. It also has some other capabilities, like unit testing (testing without deploying), but that feature is only partially implemented.