matano init should create a unique resource identifier
When running multiple inits (assuming a new directory is used), I would expect a new unique identifier to be created each time.
CDKToolkit | 0/12 | 9:06:46 AM | CREATE_FAILED | AWS::S3::Bucket | StagingBucket cdk-hnb659fds-assets-XXXXXX-us-east-2 already exists
I ran init twice, and the cdk-hnb659fds qualifier seems to be re-used. I would expect this to be unique on each run, but maybe this is a constraint of CDK.
When you have multiple repeated deployment failures, this makes cleanup difficult. I would also expect each deployment to have unique roles.
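For reference, hnb659fds is the default qualifier CDK uses to name bootstrap resources, so every default-bootstrapped deployment in an account/region shares the same staging bucket and role names. CDK does allow opting into a unique qualifier per deployment; a minimal sketch (illustrative, not Matano's actual code):

```typescript
// Sketch: pointing a CDK stack at a non-default bootstrap qualifier.
// Assumes the environment was bootstrapped with a matching qualifier:
//   cdk bootstrap --qualifier quniq1 aws://ACCOUNT/us-east-2
// "quniq1" is illustrative; qualifiers must be <= 10 alphanumeric chars.
import { App, Stack, DefaultStackSynthesizer } from "aws-cdk-lib";

const app = new App();

new Stack(app, "DPCommonStack", {
  synthesizer: new DefaultStackSynthesizer({
    // Every bootstrap resource name (staging bucket, file-publishing
    // role, etc.) derives from this qualifier, so a unique value per
    // deployment avoids the "StagingBucket ... already exists" collision.
    qualifier: "quniq1",
  }),
});

app.synth();
```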
```json
{
  "version": "20.0.0",
  "files": {
    "70f03c831095bf0345af1dac68037dcb2b95a9fe0c4b4d27738cfad55da1c8c7": {
      "source": {
        "path": "DPCommonStack.template.json",
        "packaging": "file"
      },
      "destinations": {
        "647303185053-us-east-2": {
          "bucketName": "cdk-hnb659fds-assets-XXXXX-us-east-2",
          "objectKey": "70f03c831095bf0345af1dac68037dcb2b95a9fe0c4b4d27738cfad55da1c8c7.json",
          "region": "us-east-2",
          "assumeRoleArn": "arn:${AWS::Partition}:iam::XXX:role/cdk-hnb659fds-file-publishing-role-XXX-us-east-2"
        }
      }
    }
  },
  "dockerImages": {}
}
```
mfranz@pixel-slate-cros:~$ cat /tmp/matanocdkoutonT9xy/DPCommonStack.assets.json
```json
{
  "version": "20.0.0",
  "files": {
    "70f03c831095bf0345af1dac68037dcb2b95a9fe0c4b4d27738cfad55da1c8c7": {
      "source": {
        "path": "DPCommonStack.template.json",
        "packaging": "file"
      },
      "destinations": {
        "647303185053-us-east-2": {
          "bucketName": "cdk-hnb659fds-assets-XXXX-us-east-2",
          "objectKey": "70f03c831095bf0345af1dac68037dcb2b95a9fe0c4b4d27738cfad55da1c8c7.json",
          "region": "us-east-2",
          "assumeRoleArn": "arn:${AWS::Partition}:iam::XXX:role/cdk-hnb659fds-file-publishing-role-XXX-us-east-2"
        }
      }
    }
  },
  "dockerImages": {}
}
```
Multiple Matano deployments within the same region + account are currently not supported.
We can use this issue to track it as a feature request; feel free to describe why this would be useful.
If it is mostly for helping with cleanups, we can solve that with a simpler fix (an is_production: false option in matano.config.yml to disable the aggressive retain-on-delete policies specified on stateful resources, a matano destroy command, etc.)
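For illustration, such a flag could toggle CDK removal policies roughly like this (a minimal sketch, not Matano's actual implementation; isProduction and the bucket name stand in for hypothetical values parsed from matano.config.yml):

```typescript
// Sketch: toggling retain-on-delete for a stateful resource based on a
// hypothetical isProduction flag read from matano.config.yml.
import { Stack, RemovalPolicy } from "aws-cdk-lib";
import { Bucket } from "aws-cdk-lib/aws-s3";

function makeLakeBucket(stack: Stack, isProduction: boolean): Bucket {
  return new Bucket(stack, "LakeBucket", {
    // RETAIN keeps the bucket (and its data) when the stack is deleted;
    // DESTROY lets a teardown remove it, which is what you want in testing.
    removalPolicy: isProduction ? RemovalPolicy.RETAIN : RemovalPolicy.DESTROY,
    // A non-empty bucket cannot be deleted, so also auto-empty it
    // whenever we intend to destroy it.
    autoDeleteObjects: !isProduction,
  });
}
```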
Cool, will try the config changes to make life easier during testing. This is more about the "principle of least surprise": I would have expected a different config directory to correspond to different backend infrastructure. I guess it is also reasonable that multiple Matano deployments would be in different AWS accounts or regions, so there would be no collision.
A matano destroy command would be great. Currently it’s a pain to clean up SQS queues and S3 buckets when experimenting.
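In the meantime, manual cleanup looks roughly like this (a sketch using the AWS SDK for JavaScript v3; the function name is illustrative, and versioned buckets need their versions and delete markers removed before the bucket itself can be deleted):

```typescript
import {
  S3Client,
  ListObjectVersionsCommand,
  DeleteObjectsCommand,
  DeleteBucketCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-2" });

// Empty a (possibly versioned) bucket, then delete it.
async function emptyAndDeleteBucket(bucket: string): Promise<void> {
  let keyMarker: string | undefined;
  let versionIdMarker: string | undefined;
  let truncated = true;
  while (truncated) {
    // Page through all object versions and delete markers.
    const page = await s3.send(new ListObjectVersionsCommand({
      Bucket: bucket,
      KeyMarker: keyMarker,
      VersionIdMarker: versionIdMarker,
    }));
    const objects = [...(page.Versions ?? []), ...(page.DeleteMarkers ?? [])]
      .map((v) => ({ Key: v.Key!, VersionId: v.VersionId }));
    if (objects.length > 0) {
      await s3.send(new DeleteObjectsCommand({
        Bucket: bucket,
        Delete: { Objects: objects },
      }));
    }
    truncated = page.IsTruncated ?? false;
    keyMarker = page.NextKeyMarker;
    versionIdMarker = page.NextVersionIdMarker;
  }
  // Only an empty bucket can be deleted.
  await s3.send(new DeleteBucketCommand({ Bucket: bucket }));
}
```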
The matano destroy command was implemented in PR #34 a few days ago. Could y'all test it out and let us know if that resolves the issue?