
Allow override of the TemplateConfiguration for the CloudFormation provider

Open Nr18 opened this issue 2 years ago • 4 comments

When using ServiceCatalog as a deployment provider, you have the ability to supply a configuration_file_path. When you use the generate_params.py functionality with a single template in the pipeline, you will not have any issues.

But when you have multiple templates that you would like to deploy as part of a single deployment, and the CloudFormation Parameters of those stacks are not equal, you will run into issues.

By supporting the configuration_file_path for a CloudFormation deployment target, you would make it possible to specify dedicated configurations for these "other" stacks.

A use case for this is combining a delegation role in the master account with the implementation of the service in another account. Keeping them in a single pipeline and repository makes them easy to maintain and guarantees the execution order.

Nr18 avatar Mar 09 '22 13:03 Nr18

If I get it right, you are using the CloudFormation provider, and you want to have multiple stacks within the same repository.

Did you consider using the Mono Repo structure, as explained here? https://github.com/awslabs/aws-deployment-framework/tree/0723ddf4eaf55888ae780dc48873f0ec4766cfbd/samples/sample-mono-repo

The root_dir property allows you to specify which stack you are deploying, storing each stack in a separate directory. This way you can use one pipeline per stack, or combine them all in a single pipeline.

If you combine all in a single pipeline, you would change into each stack's directory, run the generate_params.py script, and go to the next directory. For the deployment targets, you can set the root_dir per target.
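
For example, a single-pipeline build phase could run the script once per stack directory, roughly like this (a sketch only; the delegation-role and service directory names are hypothetical):

# Hypothetical example: run parameter generation once per stack directory.
for stack_dir in delegation-role service; do
  cd "$stack_dir"
  python ../adf-build/generate_params.py
  cd ..
done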

Would this resolve your issue?

sbkok avatar Mar 11 '22 09:03 sbkok

Hi @sbkok,

Yes, I did have a look at the mono-repo example, but that requires 2 pipelines, and then you cannot pass the output of the first CloudFormation stack to the next.

I tried setting the root_dir and executing generate_params.py, but this failed. I will reproduce it and post the details; if I remember correctly, it had import issues when executed, so I figured it was not a supported scenario. But from your reaction I assume it should work? In that case, I might have found a bug.

I will come back to this!

Nr18 avatar Mar 11 '22 09:03 Nr18

[Container] 2022/03/11 11:55:16 Running command cd ./subscription
[Container] 2022/03/11 11:55:16 Running command python ../adf-build/generate_params.py
Traceback (most recent call last):
  File "../adf-build/generate_params.py", line 17, in <module>
    from resolver import Resolver
  File "/codebuild/output/src140950311/src/adf-build/resolver.py", line 11, in <module>
    from s3 import S3
ModuleNotFoundError: No module named 's3'
 
[Container] 2022/03/11 11:55:16 Command did not exit successfully python ../adf-build/generate_params.py exit status 1
[Container] 2022/03/11 11:55:16 Phase complete: BUILD State: FAILED
[Container] 2022/03/11 11:55:16 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: python ../adf-build/generate_params.py. Reason: exit status 1

Ah yes, that was it indeed. Because the cwd has changed, the s3 module could not be found. The following command:

PYTHONPATH=../adf-build/python python ../adf-build/generate_params.py

would solve that... 🤔 What approach do you prefer here @sbkok? The PYTHONPATH is set at the project level, so we need an absolute path in order to get this working...
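
For illustration, one option would be to build an absolute path from CodeBuild's CODEBUILD_SRC_DIR environment variable, which points at the root of the checked-out source (a sketch only, assuming adf-build/ sits at that root, as the paths in the log above suggest):

# Sketch: resolve the shared modules via an absolute path, so changing directories no longer matters.
export PYTHONPATH="${CODEBUILD_SRC_DIR}/adf-build/python:${PYTHONPATH}"
cd ./subscription
python ../adf-build/generate_params.py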

Nr18 avatar Mar 11 '22 12:03 Nr18

One solution would be replacing/appending ${LAMBDA_TASK_ROOT}/adf-build/python to the PYTHONPATH. This would then be part of the generate_params.py script. Would there be any concerns with doing this? If not, I am happy to change the PR into that, plus the needed documentation.

Nr18 avatar Mar 16 '22 11:03 Nr18