amazon-redshift-utils
[Question] Not providing the AWS access key
Hello,
I am trying to use the unload-copy module and I can see that, besides `CONFIG_FILE` and `AWS_REGION`, `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are also needed. I assume the access key is needed to access KMS for decrypting the secrets; however, I can see that if I do not provide them, there is a problem here:
https://github.com/awslabs/amazon-redshift-utils/blob/ba5bcb588fa3d684e663aea0f2808b7e5c29dd81/src/UnloadCopyUtility/global_config.py#L154-L155
The exception is thrown because `boto3` needs these parameters, but my question is: why is an EC2 client needed at all? Is it only used for checking the region?
I am asking because my use case is to run this utility in an ECS task to migrate some data between two Redshift databases. I need to inject the secrets through environment variables from SSM, so they are in plaintext inside the container. I tried to generate the `CONFIG_FILE` internally and then pass it to `bin/run-unload-copy-utility.sh`. In that case there is no reason to store the secrets encrypted in the config file, since encryption at rest is already handled by SSM. Therefore I do not need the IAM access key, but I am afraid the current implementation forces me to do the opposite. Is there a way to use the utility as described, without the access key?
Thanks in advance!
The ec2_client is just a way of getting a list of all the available regions by connecting to an AWS endpoint. Otherwise, relying on the local boto3 data files doesn't guarantee that the list will be up to date.
Is there any reason you don't want to store the encrypted secrets in the config file? That seems like the most straightforward solution. You always need either the access key and secret key pair, or the config file with encrypted values.