aws-cli
awscli assume role creates tmpfile in $HOME/.aws, this breaks if $HOME/.aws is readonly mounted
If $HOME/.aws is read-only mounted, when using a profile that does an assume role like so:

```ini
[master]
....
[eu-staging]
role_arn = arn:aws:iam::REDACTED
source_profile = master
region = eu-west-1
```
The command:

```shell
aws --profile eu-staging ec2 describe-instances
```

Error:

```
[Errno 30] Read-only file system: '/home/spinnaker/.aws/cli'
```
Please address the tempfile requirement, as in kubernetes land we mount secrets as folders, and can't make them read-write
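For context, this is why the mount can't simply be made writable: Kubernetes always mounts secret volumes read-only, and a minimal pod spec (all names here are illustrative, not from the original report) looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: aws-cli-demo
spec:
  containers:
  - name: app
    image: amazon/aws-cli
    volumeMounts:
    - name: aws-creds
      mountPath: /home/spinnaker/.aws
      readOnly: true   # secret volumes are read-only regardless of this flag
  volumes:
  - name: aws-creds
    secret:
      secretName: aws-credentials
```

Any attempt by the CLI to create ~/.aws/cli under that mount will fail with the Errno 30 above.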
Can you give me some more details of what your setup is? I am not familiar with Kubernetes at all. My concern is that disabling caching will be really inefficient, as you'll need to do the assume-role call before every single CLI operation. Would an environment variable/config option that allows you to disable caching be sufficient for your needs?
I have a similar issue. The home directory of a service account does not get mounted, so when the awscli tries to write the $HOME/.aws/cli/cache files I get the following error:

```
[Errno 13] Permission denied: '/home/username'
```
I am using the environment variables to redirect the credentials and config files.
For me, an Environment variable to redirect the whole .aws folder to a new location would be a good solution.
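The redirect mentioned above can be sketched with the two environment variables the CLI already honors; the paths below are illustrative, not from the original comment:

```shell
# Point the CLI at config/credentials files outside the read-only ~/.aws.
export AWS_CONFIG_FILE=/tmp/aws/config
export AWS_SHARED_CREDENTIALS_FILE=/tmp/aws/credentials

# Then run CLI commands as usual, e.g.:
# aws --profile eu-staging ec2 describe-instances
```

Note that these redirect only the config and credentials files; the assume-role cache under ~/.aws/cli is not affected, which is why a cache-specific option is being requested here.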
@stealthycoin I've dug into the code, and the issue is caused by the caching mechanism of awscli/boto3. When we mount secrets/configmaps into Kubernetes containers, such as the credentials/config to ~/.aws, it is done as read-only (by k8s, and not configurable).
Running awscli sometimes creates a ~/.aws/cli/cache which fails due to the read-only filesystem.
Sorry @stealthycoin, to answer your question: a flag that disables caching would solve this problem, yes.
@pieterza I ran into the same problem trying to use an AWS config file with Spinnaker's Clouddriver deployed in K8s. I mounted my config file to $HOME/.aws as you would and noticed the secret mounts as root, so the spinnaker user doesn't have rights to do anything with the .aws directory, resulting in the error you mentioned.
Using Spinnaker custom configuration, I have my config file mounted to a non-standard AWS location and use the AWS_CONFIG_FILE environment variable to point to it. This way, when an AWS CLI command is run, the spinnaker user will create and own the .aws folder, allowing the cli folder to be created and commands to run successfully. I tested this using `aws ecr get-authorization-token --profile {SOME_PROFILE}` and it worked as expected.
Here's my clouddriver.yml as an example:

```yaml
env:
  AWS_CONFIG_FILE: /home/spinnaker/tmp/config
kubernetes:
  volumes:
  - id: aws-profile
    mountPath: /home/spinnaker/tmp
    type: secret
```
I know this specifically doesn't solve the problem but at least it's a workaround.
Thanks a lot @jhindulak I think this will work nicely!
This worked for me:

```shell
export HOME=/tmp
aws s3 ls s3://MY_BUCKET
```
Hi all, I've reopened this after further requests for this functionality. Other new features may affect how this can be implemented. For example, there is now a separate cache file that is written to when using AWS SSO. By default, this is at ~/.aws/sso/cache on Linux and Mac operating systems. So, even if we added an option for controlling where the assume role cache is written, it would not affect other configuration writes to disk.
I'd like to gather more feedback on the change with that information in mind. Thanks!
@kdaily Thanks for reopening. What are your thoughts on standardizing all the credential caches under a single parent directory, for example ~/.aws/cache (or credentialcache), and then each separate feature would have a sub directory under that one, for example ~/.aws/cache/sso or ~/.aws/cache/cli. Then we could have an environment variable that changes the location of the parent directory, and that would affect all of the credential cache dirs.
We have the same issue, and @jeremydonahue's suggestion of a single configurable cache dir would solve our use-case. Our use-case is that on local k8s clusters (kind/microk8s) we mount the .aws secret read-only, as some containers need aws access to s3 and ecr.
Having an environment variable to redirect the cache folder would solve my issue.
I found this related issue: https://github.com/aws/aws-cli/issues/1804. Is there enough overlap for these to be considered duplicates? Please let us know if there are any major distinctions to make between the issues, otherwise I think they should be consolidated.
Yeah, I think they're both asking for the same solution: being able to redirect the .aws directory.
Thanks for confirming - I'll go ahead and close this issue then and we can continue tracking https://github.com/aws/aws-cli/issues/1804. If anyone has anything else to add regarding this please let us know here or consider creating a new issue.
⚠️COMMENT VISIBILITY WARNING⚠️
Comments on closed issues are hard for our team to see. If you need more assistance, please open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.