If a dockerfile has a step to create a directory /workspace, kaniko fails
Actual behavior
If a Dockerfile has a step to create the directory /workspace, kaniko fails.
Expected behavior
Parity with Docker: I expected the build to pass, just as it does with docker build.
To Reproduce
Simple Dockerfile:
FROM alpine:3.9
RUN mkdir /workspace
Running with Docker passes:
docker build -f Dockerfile-workspace .
Sending build context to Docker daemon 2.816MB
Step 1/2 : FROM alpine:3.9
---> 78a2ce922f86
Step 2/2 : RUN mkdir /workspace
---> Running in 37d72caee37d
Removing intermediate container 37d72caee37d
---> c5c09e97f416
Successfully built c5c09e97f416
Running with kaniko fails with "mkdir: can't create directory '/workspace': File exists":
docker run -v `pwd`:/kaniko-workspace/ gcr.io/kaniko-project/executor:latest --no-push --dockerfile Dockerfile-workspace --context /kaniko-workspace/ --verbosity debug --destination=image:tag --tarPath=/kaniko-workspace/image.tar
DEBU[0000] Copying file /kaniko-workspace/Dockerfile-workspace to /kaniko/Dockerfile
DEBU[0000] Skip resolving path /kaniko/Dockerfile
DEBU[0000] Skip resolving path /kaniko-workspace/
DEBU[0000] Skip resolving path /cache
DEBU[0000] Skip resolving path /kaniko-workspace/image.tar
DEBU[0000] Skip resolving path
DEBU[0000] Skip resolving path
DEBU[0000] Built stage name to index map: map[]
INFO[0000] Retrieving image manifest alpine:3.9
INFO[0000] Retrieving image alpine:3.9
DEBU[0002] No file found for cache key sha256:65b3a80ebe7471beecbc090c5b2cdd0aafeaefa0715f8f12e40dc918a3a70e32 stat /cache/sha256:65b3a80ebe7471beecbc090c5b2cdd0aafeaefa0715f8f12e40dc918a3a70e32: no such file or directory
DEBU[0002] Image alpine:3.9 not found in cache
INFO[0002] Retrieving image manifest alpine:3.9
INFO[0002] Retrieving image alpine:3.9
INFO[0003] Built cross stage deps: map[]
INFO[0003] Retrieving image manifest alpine:3.9
INFO[0003] Retrieving image alpine:3.9
DEBU[0004] No file found for cache key sha256:65b3a80ebe7471beecbc090c5b2cdd0aafeaefa0715f8f12e40dc918a3a70e32 stat /cache/sha256:65b3a80ebe7471beecbc090c5b2cdd0aafeaefa0715f8f12e40dc918a3a70e32: no such file or directory
DEBU[0004] Image alpine:3.9 not found in cache
INFO[0004] Retrieving image manifest alpine:3.9
INFO[0004] Retrieving image alpine:3.9
INFO[0005] Executing 0 build triggers
INFO[0005] Unpacking rootfs as cmd RUN mkdir /workspace requires it.
DEBU[0005] Mounted directories: [{/kaniko false} {/etc/mtab false} {/tmp/apt-key-gpghome true} {/var/run false} {/proc false} {/dev false} {/dev/pts false} {/sys false} {/sys/fs/cgroup false} {/sys/fs/cgroup/cpuset false} {/sys/fs/cgroup/cpu false} {/sys/fs/cgroup/cpuacct false} {/sys/fs/cgroup/blkio false} {/sys/fs/cgroup/memory false} {/sys/fs/cgroup/devices false} {/sys/fs/cgroup/freezer false} {/sys/fs/cgroup/net_cls false} {/sys/fs/cgroup/perf_event false} {/sys/fs/cgroup/net_prio false} {/sys/fs/cgroup/hugetlb false} {/sys/fs/cgroup/pids false} {/sys/fs/cgroup/rdma false} {/sys/fs/cgroup/systemd false} {/dev/mqueue false} {/dev/shm false} {/kaniko-workspace false} {/etc/resolv.conf false} {/etc/hostname false} {/etc/hosts false} {/proc/bus false} {/proc/fs false} {/proc/irq false} {/proc/sys false} {/proc/sysrq-trigger false} {/proc/acpi false} {/proc/kcore false} {/proc/keys false} {/proc/timer_list false} {/proc/sched_debug false} {/sys/firmware false}]
DEBU[0006] Not adding /dev because it is ignored
DEBU[0006] Not adding /etc/hostname because it is ignored
DEBU[0006] Not adding /etc/hosts because it is ignored
DEBU[0006] Not adding /etc/mtab because it is ignored
DEBU[0006] Not adding /proc because it is ignored
DEBU[0006] Not adding /sys because it is ignored
DEBU[0006] Not adding /var/run because it is ignored
INFO[0006] RUN mkdir /workspace
INFO[0006] Taking snapshot of full filesystem...
INFO[0006] cmd: /bin/sh
INFO[0006] args: [-c mkdir /workspace]
INFO[0006] Running: [/bin/sh -c mkdir /workspace]
mkdir: can't create directory '/workspace': File exists
error building image: error building stage: failed to execute command: waiting for process to exit: exit status 1
Additional Information
I stumbled on this issue and couldn't find any documentation or prior report about it in kaniko. It is easy to work around by using
mkdir -p /workspace
instead, but I'm worried that kaniko uses that directory internally and that it could conflict with what the Docker build is trying to do. Can one of the maintainers confirm? I can imagine more complex cases where permissions or a chown on that directory are needed for the container and could mess things up. Is it bad practice for a Dockerfile to use /workspace? If so, this should be documented; better yet, I would prefer the kaniko code be changed so that users do not inadvertently collide with internal directory structures needed by kaniko.
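A minimal sketch of the workaround, based on the reproduction Dockerfile above:

FROM alpine:3.9
# -p makes mkdir a no-op when the directory already exists, so the build
# succeeds even though /workspace is already present in kaniko's build root.
RUN mkdir -p /workspace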
Kaniko Image (fully qualified with digest)
"Id": "sha256:d4478ae513eed31b69e375a681fee25592cd1e3618ce28f9633af841cba24410",
"RepoTags": [
"gcr.io/kaniko-project/executor:latest"
],
Triage Notes for the Maintainers
Description | Yes/No
---|---
Please check if this is a new feature you are proposing |
Please check if the build works in docker but not in kaniko |
Please check if this error is seen when you use --cache flag |
Please check if your dockerfile is a multistage dockerfile |
kaniko does not use this directory in the setup you provided, so it is safe to use directly. It would "use" it if you specified --context=/workspace, but that isn't the case here, so it is fine. kaniko essentially uses two root directories for its operation: /kaniko and <the provided --context= value>.
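For illustration, a hypothetical variant of the reproduction command above in which kaniko really would treat /workspace as one of its own roots, because the build context is mounted and passed there:

# Hypothetical variant: here --context points at /workspace, so kaniko would
# "use" that directory itself. The original reproduction mounted the context
# at /kaniko-workspace instead, so /workspace is not used by kaniko there.
docker run -v `pwd`:/workspace/ gcr.io/kaniko-project/executor:latest \
  --no-push --dockerfile Dockerfile-workspace --context /workspace/ \
  --verbosity debug --destination=image:tag --tarPath=/workspace/image.tar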
This directory exists as a side effect of setting WORKDIR /workspace in the kaniko executor image itself, when we create the gcr.io/kaniko-project/executor:latest image we release:
https://github.com/GoogleContainerTools/kaniko/blob/main/deploy/Dockerfile#L86
WORKDIR /workspace
I believe it is actually Docker that creates this /workspace directory, since it is specified as the WORKDIR for the gcr.io/kaniko-project/executor:latest image:
aprindle@aprindle-ssd ~/kaniko [main] docker image inspect -f '{{.Config.WorkingDir}}' gcr.io/kaniko-project/executor:latest
/workspace
A potential fix for this issue would be to change the kaniko image to not have a WORKDIR specified (or to make it /kaniko or something like that). Currently I am weighing the priority of fixing this issue against possibly breaking WORKDIR-relative paths for those who might rely on them in the image.
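Sketched as a diff against deploy/Dockerfile (illustrative only, not the actual patch), the two options being weighed amount to either dropping the line or repointing it:

-WORKDIR /workspace
+WORKDIR /kaniko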
This is also related to a problem with GitLab CI/CD, which recommends using Kaniko for Kubernetes builds.
Between the stages of a multi-stage Dockerfile, Kaniko deletes the entire filesystem except for ignored paths; see https://github.com/GoogleContainerTools/kaniko/blob/8d7d925a735a1bf0d30de64036d6ee61a0cbf9ab/pkg/executor/build.go#L745
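For example, a minimal multi-stage Dockerfile like this illustrative one hits that code path: before the final stage is built, everything outside the ignored paths (including /workspace) is wiped:

FROM alpine:3.9 AS builder
RUN echo hello > /built.txt

# Before kaniko starts this stage it deletes the previous stage's filesystem,
# keeping only ignored paths such as /kaniko, so /workspace is gone afterwards.
FROM alpine:3.9
COPY --from=builder /built.txt /built.txt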
In GitLab CI/CD, many actions (such as an after_script or saving artifacts) will try to run new commands, and those commands are started in the WORKDIR. Since /workspace was deleted, this fails.
This is documented, with workarounds, at https://docs.gitlab.com/runner/executors/kubernetes.html#use-kaniko-to-build-docker-images and in https://gitlab.com/gitlab-org/gitlab-runner/-/issues/30769#note_1452088669 , but it would obviously be better if the Kaniko image weren't built in a way that hits this issue.
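As a rough sketch only (the job layout and the trailing mkdir are my assumptions, not necessarily the exact workaround from the links above), the mitigation amounts to recreating /workspace before GitLab tries to start follow-up commands in it:

build:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor --context "${CI_PROJECT_DIR}" --dockerfile "${CI_PROJECT_DIR}/Dockerfile" --no-push
    # Recreate the WORKDIR deleted during the multi-stage build so that
    # after_script and artifact handling, which start in /workspace, can run.
    - mkdir -p /workspace
  after_script:
    - echo "after_script can start because /workspace exists again"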
While I was researching this, I got the impression (I may be mistaken) that /workspace was previously used for a purpose similar to how /kaniko is used today, and the WORKDIR is probably a remnant of that.
I see that you’ve already written a fix, although it looks like the merge is blocked at the moment. I thought I’d just make a note of this here, to help users find it.