Drift Detection false positive for KmsKeyId property under AWS::Redshift::Cluster resource
Name of the resource
AWS::Redshift::Cluster
Resource Name
No response
Issue Description
The KmsKeyId property of the AWS::Redshift::Cluster resource is marked as drifted when just the key ID (not the full ARN) is specified in the template and the stack is created.
Ideally, drift detection should not be performed on the KmsKeyId property at all, per https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html#drift-considerations:
"CloudFormation does not perform drift detection on the KMSKeyId property of any resources. Because AWS KMS keys can be referenced by multiple aliases, CloudFormation can't guarantee consistently accurate drift results for this property."
Expected Behavior
The KmsKeyId property should have IN_SYNC or NOT_CHECKED status.
Observed Behavior
The KmsKeyId property is marked as NOT_EQUAL with:
expected = xxxx-xxxx-xxxx-xxxx
actual = arn:aws:kms:us-east-1:123465798012:key/xxxx-xxxx-xxxx-xxxx
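The mismatch is purely representational: the describe call returns the full key ARN while the template holds the bare key ID. A minimal sketch of normalizing the two forms before comparing (`normalize_kms_key_id` is a hypothetical helper, not CloudFormation's actual comparison logic):

```python
def normalize_kms_key_id(value: str) -> str:
    """Reduce a KMS key reference to the bare key ID.

    Accepts either a bare key ID ("xxxx-...") or a full key ARN
    ("arn:aws:kms:<region>:<account>:key/<key-id>").
    """
    if value.startswith("arn:") and ":key/" in value:
        return value.rsplit(":key/", 1)[1]
    return value

expected = "xxxx-xxxx-xxxx-xxxx"
actual = "arn:aws:kms:us-east-1:123465798012:key/xxxx-xxxx-xxxx-xxxx"
# Both forms reduce to the same key ID, so no real drift occurred.
print(normalize_kms_key_id(expected) == normalize_kms_key_id(actual))  # True
```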
Test Cases
Steps to reproduce:
- Create a stack using the following template:

```yaml
Resources:
  myCluster:
    Type: 'AWS::Redshift::Cluster'
    Properties:
      DBName: mydb
      Encrypted: true
      MasterUsername: master
      MasterUserPassword: xxxxxxxxxx
      NodeType: ds2.xlarge
      ClusterType: single-node
      KmsKeyId: xxxx-xxxx-xxxx-xxxx
```

- Run drift detection
- View drift results
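The drift results can also be pulled programmatically (e.g. with `aws cloudformation detect-stack-drift` followed by `describe-stack-resource-drifts`). As an illustration of what the false positive looks like in that output, a sketch that filters the NOT_EQUAL property differences from a sample response; the JSON below is a placeholder modeled on the DescribeStackResourceDrifts response shape, not real output:

```python
# Placeholder response modeled on the DescribeStackResourceDrifts API shape.
sample_response = {
    "StackResourceDrifts": [
        {
            "LogicalResourceId": "myCluster",
            "ResourceType": "AWS::Redshift::Cluster",
            "StackResourceDriftStatus": "MODIFIED",
            "PropertyDifferences": [
                {
                    "PropertyPath": "/KmsKeyId",
                    "ExpectedValue": "xxxx-xxxx-xxxx-xxxx",
                    "ActualValue": "arn:aws:kms:us-east-1:123465798012:key/xxxx-xxxx-xxxx-xxxx",
                    "DifferenceType": "NOT_EQUAL",
                },
            ],
        }
    ]
}

def not_equal_properties(response: dict) -> list:
    """Return (resource, property path) pairs reported as NOT_EQUAL."""
    return [
        (drift["LogicalResourceId"], diff["PropertyPath"])
        for drift in response["StackResourceDrifts"]
        for diff in drift.get("PropertyDifferences", [])
        if diff["DifferenceType"] == "NOT_EQUAL"
    ]

print(not_equal_properties(sample_response))  # [('myCluster', '/KmsKeyId')]
```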
Other Details
No response
Seems related to:
Ideally, drift detection should not have been performed on the KmsKeyId property as per - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html#drift-considerations
Drift detection shows NOT_EQUAL. During drift detection, we run a describe call on the cluster and compare the result with the current template.
https://i.amazon.com/issues/CFN-44852 to ignore the kmskeyid property
Hi @shwetayakkali - Thanks a lot for the information provided. Unfortunately, we still see a drift reported if we set the KmsKeyId property. That said, the property does not appear to be ignored (tested in eu-central-1).
Could you please re-open this issue here? (Or should I create a new issue referencing this one?)
Confirmed. Issue still persists, recommend reopening ticket for further consideration.
How do we add this property to the set ignored during drift detection? Per the documentation, it should not be checked for drift at all:
"CloudFormation does not perform drift detection on the KMSKeyId property of any resources. Because AWS KMS keys can be referenced by multiple aliases, CloudFormation can't guarantee consistently accurate drift results for this property."
Ref link: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html#drift-considerations
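For the resource-provider side, CloudFormation resource type schemas support a `propertyTransform` section (JSONata expressions keyed by property path) meant for exactly this class of false positive, where the live value is an equivalent but differently formatted form of the template value. The fragment below is a hypothetical sketch of what such an entry might look like for this property, not the actual Redshift provider schema; the expression and variable usage are assumptions:

```json
{
  "propertyTransform": {
    "/properties/KmsKeyId": "$join([\"arn:aws:kms:us-east-1:123465798012:key/\", KmsKeyId])"
  }
}
```

With a transform like this, drift detection could compare the transformed template value against the live ARN instead of comparing the raw strings.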
Just retested, confirmed the issue is still relevant. Recommend re-opening this ticket.
```python
# Inside an aws_cdk.Stack subclass:
from aws_cdk import RemovalPolicy, aws_kms as kms
from aws_cdk.aws_redshift import CfnCluster

key = kms.Key(self, "MyKey",
              removal_policy=RemovalPolicy.DESTROY,
              )
cluster = CfnCluster(self, 'Cluster',
                     cluster_type='single-node',
                     db_name='dev',
                     master_username='bevelvoerder',
                     master_user_password='Wagw00rdEen',
                     node_type='dc2.large',
                     encrypted=True,
                     kms_key_id=key.key_id,
                     classic=True,
                     )
cluster.apply_removal_policy(RemovalPolicy.DESTROY)
```
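One possible mitigation (an assumption based on the expected/actual values above, not an officially documented fix) is to put the full key ARN into the template, e.g. by passing `key.key_arn` instead of `key.key_id` in the CDK snippet, so that the template value matches what the describe call returns. A minimal sketch of the ARN shape being compared:

```python
def kms_key_arn(region: str, account: str, key_id: str) -> str:
    """Build the full KMS key ARN in the form the describe call returns."""
    return f"arn:aws:kms:{region}:{account}:key/{key_id}"

# Supplying this full ARN as KmsKeyId in the template should match the
# value CloudFormation reads back, avoiding the ID-vs-ARN mismatch.
print(kms_key_arn("eu-central-1", "000000000000", "xxxx-xxxx-xxxx-xxxx"))
```

This would not help the alias case reported further below, where the live value is the key ARN but the template holds an alias ARN.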
Similar issue as https://github.com/aws-cloudformation/cloudformation-coverage-roadmap/issues/1204
@prerna-p @aygold92 @kanitkah - Could you please re-open this issue to reflect its current state? When our teams hit a drift and suspect a false positive, they check this GitHub repo to see whether it is a known issue. If the issue is closed, they will not find it. If it was closed by mistake, that blocks us or creates extra effort debugging the drift (even when none is necessary). Thanks for understanding.
PS: We are aware that Harshu & team are working on a general solution for this KMS-alias drift false positive. Once that is available, closing this case would be valid.
Cheers, Robert
@rgoltz Can you confirm whether you are seeing this issue on new stacks as well as existing stacks? We are working on the KMS alias drift false positive, but the above issue is not about aliases, just the key ID vs. the full ARN, right? expected = xxxx-xxxx-xxxx-xxxx, actual = arn:aws:kms:us-east-1:123465798012:key/xxxx-xxxx-xxxx-xxxx
@prerna-p - Could you please re-open this issue as said by Harshu?
Retested and confirmed.
CDK v2.76.0
```python
key = kms.Key(self, "MyKey",
              removal_policy=RemovalPolicy.DESTROY,
              )
cluster = CfnCluster(self, 'ClusterMitKmsId',
                     cluster_type='single-node',
                     db_name='dev',
                     master_username='bevelvoerder',
                     master_user_password='Wagw00rdEen',
                     node_type='dc2.large',
                     encrypted=True,
                     kms_key_id=key.key_id,
                     classic=True,
                     )
cluster.apply_removal_policy(RemovalPolicy.DESTROY)
```
Expected

```json
{
  "KmsKeyId": "xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx",
  "Encrypted": true,
  "NodeType": "dc2.large",
  "MasterUsername": "bevelvoerder",
  "DBName": "dev",
  "ClusterType": "single-node",
  "Classic": true
}
```

Actual

```json
{
  "KmsKeyId": "arn:aws:kms:eu-central-1:00000000000:key/xxxxxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "Encrypted": true,
  "NodeType": "dc2.large",
  "MasterUsername": "bevelvoerder",
  "DBName": "dev",
  "ClusterType": "single-node"
}
```
updated on https://github.com/aws-cloudformation/aws-cloudformation-resource-providers-redshift/issues/132 and https://github.com/aws-cloudformation/aws-cloudformation-resource-providers-redshift/issues/131
False-positive drift is also reported when using the key's alias:

```python
alias = 'c-sharp'
key = kms.Key(self, "MyKey",
              alias=alias,
              removal_policy=RemovalPolicy.DESTROY,
              )
alias_arn = "arn:aws:kms:{}:{}:alias/{}".format(self.region, self.account, alias)
cluster = CfnCluster(self, 'ClusterMitKmsAlias',
                     cluster_type='single-node',
                     db_name='dev',
                     master_username='bevelvoerder',
                     master_user_password='Wagw00rdEen',
                     node_type='dc2.large',
                     encrypted=True,
                     kms_key_id=alias_arn,
                     classic=True,
                     )
cluster.apply_removal_policy(RemovalPolicy.DESTROY)
```
Expected

```json
{
  "KmsKeyId": "arn:aws:kms:eu-central-1:0000000000000:alias/c-sharp",
  "Encrypted": true,
  "NodeType": "dc2.large",
  "MasterUsername": "bevelvoerder",
  "DBName": "dev",
  "ClusterType": "single-node",
  "Classic": true
}
```

Actual

```json
{
  "KmsKeyId": "arn:aws:kms:eu-central-1:0000000000:key/xxxxxx-xxxxx-xxxx-xxxx-xxxxxx",
  "Encrypted": true,
  "NodeType": "dc2.large",
  "MasterUsername": "bevelvoerder",
  "DBName": "dev",
  "ClusterType": "single-node"
}
```