kaniko
COPY changes permissions on files
Actual behavior
Using the COPY command (v1.6.0), the resulting files get write permissions added for group/other.
Expected behavior
I expect the permissions on the files to match those from the source.
To Reproduce
After noticing the issue (only by happenstance, from the logs of a running container complaining about file permissions) I checked my various other repos/projects and noticed they all do the same thing :( I don't have an exhaustive list of which versions may be affected, however. Importantly, the issue is not isolated to just the one Dockerfile or project but seems to be present in all projects where I'm using COPY.
# .gitlab-ci.yml
stages:
  - build

variables:
  DOCKER_REGISTRY: "${CI_REGISTRY}"
  DOCKER_IMAGE: "${CI_REGISTRY_IMAGE}"
  DOCKER_USERNAME: "${CI_REGISTRY_USER}"
  DOCKER_PASSWORD: "${CI_REGISTRY_PASSWORD}"

build:
  image:
    name: gcr.io/kaniko-project/executor:v1.6.0-debug
    entrypoint: [""]
  tags:
    - executor-kubernetes
  stage: build
  only:
    - branches
    - tags
  script: |
    if [ -z $CI_BUILD_TAG ];then
      export DOCKER_TAG="${CI_COMMIT_REF_SLUG}"
    else
      export DOCKER_TAG="${CI_BUILD_TAG}"
    fi
    FORMATTEDTAGLIST="--destination ${DOCKER_IMAGE}:${DOCKER_TAG}"
    # tag master as latest
    if [ "${CI_COMMIT_REF_NAME}" == "master" ];then
      FORMATTEDTAGLIST="${FORMATTEDTAGLIST} --destination ${DOCKER_IMAGE}:latest";
    fi
    # tag with versioned numbers
    if [[ "${CI_COMMIT_REF_NAME}" == "production" || "${CI_COMMIT_REF_NAME}" == "staging" || "${CI_COMMIT_REF_NAME}" == "master" ]];then
      FORMATTEDTAGLIST="${FORMATTEDTAGLIST} --destination ${DOCKER_IMAGE}:${DOCKER_TAG}-${CI_JOB_ID}"
    fi
    mkdir -p /kaniko/.docker
    echo "{\"auths\":{\"$DOCKER_REGISTRY\":{\"auth\":\"$(echo -n $DOCKER_USERNAME:$DOCKER_PASSWORD | base64)\"}}}" > /kaniko/.docker/config.json
    /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile $FORMATTEDTAGLIST --cache=true --cache-repo "${DOCKER_IMAGE}/kaniko-cache" --cache-ttl 672h
Additional Information
- Dockerfile
FROM alpine:latest
...
COPY cron/15min/* /etc/periodic/15min/
COPY sasl2/* /etc/postfix/sasl2/
COPY supervisor/supervisord.conf /etc/supervisord.conf
COPY scripts/* /usr/local/bin/
- Build Context
# from local checkout
ls -l cron/15min/
total 4
-rwxr-xr-x 1 thansen users 149 Jan 8 12:40 mail-db-rebuild
ls -l sasl2/
total 4
-rw-r--r-- 1 thansen users 0 Nov 5 2017 sasldb2
-rw-r--r-- 1 thansen users 266 Nov 5 2017 smtpd.conf.example
ls -l scripts/
total 11924
-rwxr-xr-x 1 thansen users 715 Jan 9 10:06 postfix-daemon.sh
-rwxr-xr-x 1 thansen users 12197363 Jan 8 17:16 postfix_exporter
-rwxr-xr-x 1 thansen users 802 Jan 8 13:16 start.sh
-rwxr-xr-x 1 thansen users 56 Jan 8 18:25 syslog-stdout.sh
ls -l supervisor/supervisord.conf
-rw-r--r-- 1 thansen users 1817 Jan 9 09:40 supervisor/supervisord.conf
# from docker build image
ls -l /etc/periodic/15min/
total 4
-rwxr-xr-x 1 root root 149 Jan 8 19:40 mail-db-rebuild
ls -l /usr/local/bin/
total 11924
-rwxr-xr-x 1 root root 691 Jan 7 21:10 postfix-daemon.sh
-rwxr-xr-x 1 root root 12197363 Jan 9 00:16 postfix_exporter
-rwxr-xr-x 1 root root 802 Jan 8 20:16 start.sh
-rwxr-xr-x 1 root root 56 Jan 9 01:25 syslog-stdout.sh
ls -l /etc/sasl2/
total 4
-rw-r--r-- 1 root root 0 Nov 6 2017 sasldb2
-rw-r--r-- 1 root root 266 Nov 6 2017 smtpd.conf.example
ls -l /etc/supervisord.conf
-rw-r--r-- 1 root root 1817 Jan 9 16:40 /etc/supervisord.conf
# from kaniko image
ls -l /etc/periodic/15min/
total 4
-rwxrwxrwx 1 root root 149 Jan 9 17:09 mail-db-rebuild
ls -l /usr/local/bin/
total 11924
-rwxrwxrwx 1 root root 715 Jan 9 17:09 postfix-daemon.sh
-rwxrwxrwx 1 root root 12197363 Jan 9 17:09 postfix_exporter
-rwxrwxrwx 1 root root 802 Jan 9 17:09 start.sh
-rwxrwxrwx 1 root root 56 Jan 9 17:09 syslog-stdout.sh
ls -l /etc/sasl2/
total 4
-rw-rw-rw- 1 root root 0 Jan 9 17:09 sasldb2
-rw-rw-rw- 1 root root 266 Jan 9 17:09 smtpd.conf.example
ls -l /etc/supervisord.conf
-rw-rw-rw- 1 root root 1817 Jan 9 17:09 /etc/supervisord.conf
- Kaniko Image
gcr.io/kaniko-project/executor@sha256:fcccd2ab9f3892e33fc7f2e950c8e4fc665e7a4c66f6a9d70b300d7a2103592f (v1.6.0-debug)
Triage Notes for the Maintainers
Description | Yes/No
---|---
Please check if this is a new feature you are proposing |
Please check if the build works in docker but not in kaniko |
Please check if this error is seen when you use --cache flag |
Please check if your dockerfile is a multistage dockerfile |
Additional information
Perhaps this is similar in some way? https://github.com/GoogleContainerTools/kaniko/issues/550, specifically this comment mentioning the writable bit: https://github.com/GoogleContainerTools/kaniko/issues/550#issuecomment-470128570
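A possible mitigation, assuming the builder honors the BuildKit-style --chmod flag on COPY (kaniko's support for it varies by version, so treat this as an untested sketch), is to pin the modes explicitly so that whatever umask the context was checked out with no longer matters:

FROM alpine:latest
...
# pin the modes instead of inheriting them from the build context
COPY --chmod=0755 scripts/* /usr/local/bin/
COPY --chmod=0644 supervisor/supervisord.conf /etc/supervisord.conf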
COPY also changes permissions on directories.
Given the following Dockerfile
FROM alpine:latest
COPY files /tmp/
CMD ["ash"]
with the files directory only containing
./files:
total 4
drwxr-xr-x 2 app adm 22 Apr 6 14:12 .
drwxr-xr-x 3 app adm 37 Apr 6 14:12 ..
-rw-r--r-- 1 app adm 3 Apr 6 14:12 file.txt
building the image with kaniko results in wrong permissions and ownership on /tmp:
./tmp:
drwxr-xr-x 1 1001 1001 22 Apr 6 12:08 .
drwxr-sr-x 1 root root 6 Apr 6 12:18 ..
-rw-r--r-- 1 1001 1001 3 Apr 6 12:08 file.txt
This makes the image unusable in some circumstances due to the wrong permissions on /tmp, especially when running the image as a userid other than 1001.
Building the image with docker gives the expected result: a mode of 1777 and ownership of 0:0 on /tmp.
This is running on a local k8s installation with kaniko-project/executor:v1.8.1 and kaniko-project/executor:v1.9.0. The resulting image is run via a local docker installation.
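One stopgap for the /tmp case, an untested sketch rather than a fix, is to re-assert the expected mode and ownership explicitly after the COPY:

FROM alpine:latest
COPY files /tmp/
# re-assert the mode/ownership docker would have left on /tmp (sticky, world-writable, root-owned)
RUN chmod 1777 /tmp && chown root:root /tmp
CMD ["ash"]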
I can confirm that I have the same issue with the latest executor:debug
kaniko
drwxrwxrwx   2 nodeuser nodegroup    4096 28 jun 16:30 config
drwxr-xr-x   2 root     root         4096 28 jun 16:30 keys
drwxr-xr-x 424 nodeuser nodegroup   20480 28 jun 16:30 node_modules
-rw-rw-rw-   1 nodeuser nodegroup  506597 28 jun 16:30 npm-shrinkwrap.json
-rw-rw-rw-   1 nodeuser nodegroup    1953 28 jun 16:30 package.json
-rw-rw-rw-   1 nodeuser nodegroup    8495 28 jun 16:30 server.js
drwxrwxrwx  11 nodeuser nodegroup    4096 28 jun 16:30 src
docker build
drwxrwxr-x   2 nodeuser nodegroup    4096  5 avr 15:13 config
drwxr-xr-x   2 nodeuser nodegroup    4096  5 avr 15:13 keys
drwxr-xr-x 413 nodeuser nodegroup   20480  5 avr 15:13 node_modules
-rw-rw-r--   1 nodeuser nodegroup  486304  5 avr 15:13 npm-shrinkwrap.json
-rw-rw-r--   1 nodeuser nodegroup    1940  5 avr 15:13 package.json
-rw-rw-r--   1 nodeuser nodegroup    8069  5 avr 15:13 server.js
drwxrwxr-x  11 nodeuser nodegroup    4096  5 avr 15:13 src
This makes the image unusable in some circumstances due to the wrong permissions on /tmp.
I believe #2192 may be responsible for (or at least contributing to) the issue.
After some research I believe this is not a bug with kaniko at all but rather a quirk of gitlab ci: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/28867
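A quick way to test that hypothesis from inside the job itself (a hedged sketch; the path assumes the repro above) is to inspect the checked-out context before kaniko runs. If the runner checked the sources out with umask 0000, the extra write bits are already present in the build context and kaniko is simply preserving them:

# added at the top of the job's script:, before /kaniko/executor is invoked
umask                              # 0000 here would confirm the runner-side umask theory
ls -l "$CI_PROJECT_DIR/scripts/"   # write bits already present => not kaniko's doing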
@DoumLaberge @jgkirschbaum You both say you replicated this, but it wasn't clear whether you also ran it via gitlab ci or elsewhere.
I attempted to replicate this locally (see https://github.com/dradetsky/replicate-kaniko-1876-n-related), and did not succeed. So while I buy Travis's explanation that this was caused by gitlab ci's umask settings, it would be more certain if you said where/how you replicated this issue.
If I use your Dockerfile I get the following
/ # ls -la /home/
total 4
drwxr-xr-x 1 1001 1001 22 Aug 26 07:47 .
drwxr-xr-x 1 root root 29 Aug 26 07:47 ..
-rw-r--r-- 1 1001 1001 3 Aug 26 07:47 file.txt
which is correct. If you execute the image with another userid you won't be able to write into /home.
In my case (as I elaborated in my issue) you won't be able to write to /tmp, which is not what I'm expecting.
@jgkirschbaum so Travis's issue was that permissions were not being set as he expected. I interpreted your issue to be that permissions were not being set as expected (I see one difference in permission bits between files in your example) and also that ownership was not being set as expected. It seemed like the latter issue was more significant for you, so that was what you focused on, but both issues were observed, correct?
IMO, we should regard unexpected permissions and unexpected ownership as two separate issues. They probably have different causes and different levels of can-we-work-around-this-somehow. For example, I personally never noticed the ownership issue since I took over a CI project where the container builds were already running as root. Which is not to say your ownership issue doesn't matter, just that I think you ought to create a new issue in the repo just for it.
EDIT: but insofar as you have any trouble with permission bits, you should add that & add clarification of where you were running kaniko when you observed it.
Hello,
our env:
- Debian Buster
- Gitlab-ce (15.3.1-ce.0)
- Gitlab-runner (15.3.0)
- Kaniko (gcr.io/kaniko-project/executor:debug)
I also have this issue; all files and directories look like this:
drwxrwxrwx 1 app app 4096 Aug 31 08:36 bank_proxy
drwxrwxrwx 1 app app 4096 Aug 31 08:16 certs
-rw-rw-rw- 1 app app 424 Aug 31 08:16 entrypoint.dev.sh.default
-rw-rw-rw- 1 app app 150 Aug 31 08:16 entrypoint.live.sh
-rwxrwxrwx 1 app app 208 Aug 31 08:16 entrypoint.test.sh
drwxrwxrwx 1 app app 4096 Aug 31 08:16 main_app
-rw-rw-rw- 1 app app 664 Aug 31 08:16 manage.py
-rw-rw-rw- 1 app app 150 Aug 31 08:16 requirements.txt
drwxr-sr-x 3 100 app 4096 Mar 29 14:10 staticfiles
I've also set the GitLab feature flag to disable the umask 000:
feature flags: FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR:true
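For reference, one way to enable that flag is via a job variable (a sketch, assuming a docker-executor runner; it can also be set in the runner's config.toml):

# hypothetical excerpt from .gitlab-ci.yml for the build job
build:
  variables:
    FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR: "true"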
More output:
...
Running with gitlab-runner 15.3.0 (bbcb5aba)
on Shared Kaniko runner x_QpvNvK
feature flags: FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR:true
Preparing the "docker" executor
00:02
Using Docker executor with image gcr.io/kaniko-project/executor:debug ...
Pulling docker image gcr.io/kaniko-project/executor:debug ...
Using docker image sha256:8cab37f84a44db3824e83b64dfe01c0636418756d8db6404279239bd749ce6af for gcr.io/kaniko-project/executor:debug with digest gcr.io/kaniko-project/executor@sha256:a3f8c85c6a0fa490b9a619bdb503c8cb98fb2fd17f96d0e4356a0bd65c1e5056 ...
Not using umask - FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR is set!
...
I tried a few workarounds .. but no success ...
cu denny
@linuxmail can you replicate this outside of gitlab ci? If not, we have to assume it's a gitlab issue.
Hi,
I will try it, but it may take a while.
@linuxmail it might be easier if you start with the replication repo I made for the original issue: https://github.com/dradetsky/replicate-kaniko-1876-n-related
Hello @dradetsky
I can confirm that it does not happen if I just build with kaniko on my own host.
docker run \
-v `pwd`:/workspace \
-v `pwd`/.docker/config.json:/kaniko/.docker/config.json:ro \
--network host \
gcr.io/kaniko-project/executor:debug \
--context dir:///workspace/app --dockerfile ./app/Dockerfile.test --cache=true --cache-copy-layers=true --cache-ttl=24h --destination git.example.com:5555/example_internal/mastercard-send:debug --label org.opencontainers.image.ref.name=git.exampple.com:5555/example_internal/mastercard-send:debug
~/web $ ls -l
total 40
drwxr-x--- 1 app app 4096 Sep 7 08:57 bank_proxy
drwxr-x--- 1 app app 4096 Sep 7 08:57 certs
-rw-r----- 1 app app 424 Sep 7 08:57 entrypoint.dev.sh.default
-rw-r----- 1 app app 150 Sep 7 08:57 entrypoint.live.sh
-rwxr-x--x 1 app app 208 Sep 7 08:57 entrypoint.test.sh
drwxr-x--- 1 app app 4096 Sep 7 08:57 main_app
-rw-r----- 1 app app 664 Sep 7 08:57 manage.py
-rw-r----- 1 app app 150 Sep 7 08:57 requirements.txt
drwxr-sr-x 1 app app 4096 Sep 7 08:57 staticfiles
So I assume it is a bug in GitLab(-runner). Thanks for the scripts :-)
Update
I also tried again with GitLab to see if it works with executor:v1.6.0-debug and executor:debug, and what should I say: it works inside GitLab too! I assume they fixed something to make it work again. Now the permissions look like I expect.
cu denny
This is almost certainly just a gitlab thing & should be closed. @jgkirschbaum should make a separate issue about ownership (if he wants) to reduce confusion.
Agreed!
This is almost certainly just a gitlab thing & should be closed. @jgkirschbaum should make a separate issue about ownership (if he wants) to reduce confusion.
I made a separate issue #2240