aws-cli
Why is the official Docker container running with `id=0` (AKA the `root` user and its security permissions)?
Confirm by changing [ ] to [x] below:
- [x] I've gone through the User Guide and the API reference
- [x] I've searched for previous similar issues and didn't find any solution
Issue is about usage on:
- [ ] Service API : I want to do X using Y service, what should I do?
- [x] CLI : passing arguments or cli configurations.
- [x] Other/Not sure.
Platform/OS/Hardware/Device: What are you running the CLI on? Docker. Specifically:
-> $ docker images | grep aws
amazon/aws-cli latest 886e608c1999 3 days ago 287MB
Describe the question
Why is the official Docker container running with `id=0` (AKA the `root` user and its security permissions)?
I find this problematic for the following reasons:
- All writes to the filesystem performed by the CLI (as and when instructed by the user) result in files owned by root (see the sketch further below).
  - Example: (1) a new user uses the official Docker image, and (2) logs in. Now (3) they have a `~/.aws` folder (and the files within) owned by root, and possibly not readable by the user's own account on the system.
- Not running as root is already a recommended security practice; tons of blog posts and fear-mongering stories can be found with cursory web searches. Here's Docker's own official take: https://docs.docker.com/engine/security/rootless/
- This breaks most OSes' paradigm of: a user's files and folders in said user's home directory should by default be readable and writable/executable (where applicable) under the user's own permissions, however that is implemented by the OS. I struggle to understand why the `aws` CLI should need to belong to the small list of exceptions that transcend this paradigm's rule.
However, there certainly are tools and limitations which require some sort of sudo permissions on Linux-based systems (such as `ping`, and other issues listed here).
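To make the first point above concrete, here is a rough sketch of how the ownership issue shows up (assuming the current image, whose `ENTRYPOINT` is `aws` and whose home directory is `/root`; the directory and region below are just illustrative examples):

```sh
# Mount an empty host directory as the container's ~/.aws and let the CLI write to it.
mkdir -p /tmp/aws-owner-demo
docker run --rm \
  -v /tmp/aws-owner-demo:/root/.aws \
  amazon/aws-cli configure set region eu-west-1

# The resulting config file on the host is expected to be owned by uid/gid 0 (root),
# since the container process ran as root.
ls -ln /tmp/aws-owner-demo
```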
I'm opening this issue with the hope of exploring the following topics/questions:
- Has this ever been considered/discussed/explored beyond "coffee talk over lunch" anywhere? I couldn't find any such conversation in any issue. It would help to know what thoughts have been put into this previously (if any).
- How much effort would it be to ensure that no `root` permissions are ever needed by default? There's nothing stopping a user from forcing a Docker container to run with root privileges on the Docker host machine anyway (a sketch of this follows below)...
Logs/output
Get full traceback and error logs by adding `--debug` to the command.
Not applicable.
Hi @x10an14, thanks for the suggestion.
I understand point 1. This could be mitigated by doing a `groupadd` and a `useradd` to add an application user, and then using the Docker `USER` instruction in the Dockerfile to change to that user and run subsequent commands. There are at least two options there:
- Add the group and user prior to installing the client and install it as that user.
- Install the client as root and then only switch to the application user at the end so that by default container image users are dropped in as that user.
These set up different user experiences and expectations.
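As a very rough sketch of what option 2 might look like (the user/group names and IDs are placeholders and the install steps are elided; this is not the actual Dockerfile):

```dockerfile
FROM amazonlinux:2

# ... install the AWS CLI and its dependencies here, still as root ...

# Create an unprivileged application user (shadow-utils provides groupadd/useradd;
# install it first if the base image doesn't already ship it).
RUN groupadd --gid 1000 awscli && \
    useradd --uid 1000 --gid awscli --create-home awscli

# Switch so that anyone running the image is dropped in as the non-root user by default.
USER awscli

ENTRYPOINT ["aws"]
```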
For point 2, I think that the link you refer to (https://docs.docker.com/engine/security/rootless/) is regarding running the Docker daemon as a non-root user, not running an application inside of a container as a non-root user. This section of the Docker Security page (https://docs.docker.com/engine/security/security/#linux-kernel-capabilities) hints at some of the reasons you've outlined. I could also easily find a number of references as to why a non-root application user is desirable.
Have you tried to modify the current Dockerfile to not run as root by default to see if it's as straightforward as suggested?
I think this issue warrants some more discussion, so marking it as such.
> For point 2, I think that the link you refer to (https://docs.docker.com/engine/security/rootless/) is regarding running the Docker daemon as a non-root user, not running an application inside of a container as a non-root user. This section of the Docker Security page (https://docs.docker.com/engine/security/security/#linux-kernel-capabilities) hints at some of the reasons you've outlined. I could also easily find a number of references as to why a non-root application user is desirable.
Thanks! =) I appreciate your flexibility and understanding of my mistakes!
Considering that on the technical level we're discussing numerical user (Uid) and group (Gid) IDs (which by convention are always `0` and `0` respectively for the `root` user) - ~I don't understand why your option 2 would be preferable to option 1~. ~The only reason I could imagine this being the case is if `aws-cli` requires `root` for something - which I would like to better understand the rationale behind.~ ~If it is indeed the case for a CLI interface that is just a local "middle-man" for REST HTTP requests to AWS REST HTTP APIs.~
My rationale behind the claim that leveraging `root` as the `Dockerfile`'s default is entirely unnecessary is that any user can leverage the `--user` flag available on the `docker run` command. An example of this functionality offered by `docker` in practice (albeit they maintain a root default):
[2020-06-18 14:12:40] 0 x10an14@x10-desktop:~
-> $ docker pull library/node:12-slim
12-slim: Pulling from library/node
Digest: sha256:7676294fed76a8127254821edc6891a3fa304dec73117e6e18269d42fb83f3de
Status: Image is up to date for node:12-slim
docker.io/library/node:12-slim
[2020-06-18 14:14:04] 0 x10an14@x10-desktop:~
-> $ docker run --rm -it library/node:12-slim bash -c "whoami && id"
root
uid=0(root) gid=0(root) groups=0(root)
[2020-06-18 14:14:12] 0 x10an14@x10-desktop:~
-> $ docker run --rm -it --user node library/node:12-slim bash -c "whoami && id"
node
uid=1000(node) gid=1000(node) groups=1000(node)
[2020-06-18 14:14:22] 0 x10an14@x10-desktop:~
-> $
So, as you can see, `library/node` has created the `node` user in their `Dockerfile`, but has not `USER`-switched to the `node` user within their `Dockerfile` (before the `ENTRYPOINT`/`CMD` lines).
EDIT: I misunderstood the two options. Both are fine wrt. the end result in my opinion. I don't think it's any security risk having the `Dockerfile` install the `aws-cli` as root, as long as it switches to a non-root user before `CMD`/`ENTRYPOINT`. Ref: https://github.com/aws/aws-cli/issues/5120#issuecomment-645983515
Finally, I'd propose the following (just to be explicit):
- Ensure `aws-cli` has no inbuilt expectations/functionality that requires `root`/`sudo`/`admin` (whatever you'd like to call it) permissions at run-time.
- Create a default non-root user (how to do this depends on your base image), and `USER`-switch to it in the `Dockerfile` at the earliest possibility. From my own experience, you'd need to accomplish the following with `root` before performing the `USER` switch:
  - Install required dependencies/tools for `aws-cli`
  - Install security updates/fixes for all installed packages within the Docker image
  - Create the non-root user
- Inform users who claim this is backwards incompatible to leverage the `docker run` functionality described in my previous/above post.
> Have you tried to modify the current Dockerfile to not run as root by default to see if it's as straightforward as suggested?

No, I will inform you when I've gotten around to it!
PS: I remember trying to search for it back in April, but for some reason I couldn't find it in your repo. Would you mind passing a direct link to your latest version of it?
Another usage example of `docker run`'s `--user` flag:
[2020-06-18 14:54:48] 0 x10an14@x10-desktop:~
-> $ mkdir testy
[2020-06-18 14:54:50] 0 x10an14@x10-desktop:~
-> $ cd testy/
[2020-06-18 14:54:52] 0 x10an14@x10-desktop:~/testy
-> $ docker run --rm --user "0":"12000" -w /meh -v "$PWD":/meh debian:10-slim sh -c "touch /meh/bleh"
[2020-06-18 14:55:02] 0 x10an14@x10-desktop:~/testy
-> $ la
total 8.0K
4.0K drwxr-xr-x 2 x10an14 x10an14 4.0K Jun 18 14:55 ./
4.0K drwxr-xr-x 57 x10an14 x10an14 4.0K Jun 18 14:54 ../
0 -rw-r--r-- 1 root 12000 0 Jun 18 14:55 bleh
[2020-06-18 14:55:10] 0 x10an14@x10-desktop:~/testy
-> $ docker run --rm -w /meh -v "$PWD":/meh debian:10-slim sh -c "touch /meh/foo"
[2020-06-18 14:56:06] 0 x10an14@x10-desktop:~/testy
-> $ la
total 8.0K
4.0K drwxr-xr-x 2 x10an14 x10an14 4.0K Jun 18 14:56 ./
4.0K drwxr-xr-x 57 x10an14 x10an14 4.0K Jun 18 14:54 ../
0 -rw-r--r-- 1 root 12000 0 Jun 18 14:55 bleh
0 -rw-r--r-- 1 root root 0 Jun 18 14:56 foo
[2020-06-18 14:56:07] 0 x10an14@x10-desktop:~/testy
-> $
EDIT: This demonstrates that one can set the Gid and Uid to whatever one wants, e.g. back to `root`, if one insists on such bad practices. Bad (security) practice should be opt-in, not opt-out.
I tried to run the aws docker image with the flag `--user "$(id -u):$(id -g)"`. With this option, new files created inside the container will have the correct permissions. But I also mounted my aws config folder to `/root/.aws`. This folder belongs to `root`. When the `--user` flag is used, the container user no longer has access to the config folder.
IMO, it's not best practice to run the container as root. It's much better to create a user with uid 1000 and gid 1000 and run aws-cli under that user. Please take a look at Gradle's Dockerfile; I think this can be fixed in a similar way.
Hello, any update on this?
I'm also interested in finding a workaround/fix for this security concern. At the moment (with version `aws-cli/2.3.2`), the following command fails because the provided UID:GID doesn't have permission to the container's `/root` folder:
$ docker run --rm -it -v ~/.aws:/root/.aws:rw -v "$(pwd)":/aws --user "$(id -u):$(id -g)" amazon/aws-cli configure
AWS Access Key ID [None]: test
AWS Secret Access Key [None]: test
Default region name [None]:
Default output format [None]:
[Errno 13] Permission denied: '/.aws'
One workaround for the UID:GID fix and local credentials is to map the local `~/.aws` directory to `/tmp` and then reference the files via environment variables. With the following script to inject the current UID:GID, you can map the `config` and `credentials` files and get the proper permissions on the resultant file (I named the script `aws`; see here for more details):
#!/bin/zsh
# Run the official image as the current host user, pointing the CLI at the
# config/credentials mounted under /tmp.
docker run \
  --rm \
  --user "$(id -u):$(id -g)" \
  -e AWS_CONFIG_FILE=/tmp/.aws/config \
  -e AWS_SHARED_CREDENTIALS_FILE=/tmp/.aws/credentials \
  -v "$HOME/.aws:/tmp/.aws:rw" \
  -v "$(pwd):/aws" \
  amazon/aws-cli "$@"
$ cd ~/tmp
$ pwd
/home/gadams/tmp
$ whoami
gadams
$ aws s3 cp s3://my_bucket/madcow.jpg .
download: s3://my_bucket/madcow.jpg to ./madcow.jpg
$ ls -l
total 88
-rw-r--r-- 1 gadams gadams 86406 Mar 5 2016 madcow.jpg
With the steps above to map UID:GID, this appears to work, at least on AL2. One thing I don't know: if you are using any special service JSON files that need to be stored in the models directory (`~/.aws/models`), how do you reference those? I didn't find an environment variable for that.
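One possible (untested) variation: override `HOME` inside the container and mount the whole `~/.aws` directory at the new home, so anything the CLI resolves relative to the home directory (which should include `~/.aws/models`, assuming the CLI expands `~` via `HOME`) comes along without needing a dedicated environment variable:

```sh
docker run --rm \
  --user "$(id -u):$(id -g)" \
  -e HOME=/tmp \
  -v "$HOME/.aws:/tmp/.aws:rw" \
  -v "$(pwd):/aws" \
  amazon/aws-cli sts get-caller-identity
```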
Trying to run amazon/aws-cli on K8s with Pod Security Standards?
kubectl run --rm --attach --image amazon/aws-cli aws --restart=Never --overrides='{"spec": {"securityContext":{"runAsUser": 1}}}' --env "HOME=/tmp" -- sts get-caller-identity
It's a joke that this image can't be run on any more tightly secured cluster that requires running as non-root.
> Trying to run amazon/aws-cli on K8s with Pod Security Standards?
> kubectl run --rm --attach --image amazon/aws-cli aws --restart=Never --overrides='{"spec": {"securityContext":{"runAsUser": 1}}}' --env "HOME=/tmp" -- sts get-caller-identity
To make it explicitly clear to anyone who comes across this, the trick is setting the environment variable `HOME` to `/tmp`.
Yes, but it is still a joke; there is no reason to run this container as root. I would expect more from the AWS folks.