kaniko
error building image: deleting file system after stage 0: directory not empty
Actual behavior
I'm currently using multi-stage builds from Docker: the first stage builds artifacts (with Gradle) and the second one runs the application. I get an error when the second one starts:
...
BUILD SUCCESSFUL in 5m 30s
75 actionable tasks: 75 executed
INFO[4014] Taking snapshot of full filesystem...
INFO[4085] RUN cd ${REPO}/build/distributions && unzip -q software.zip -d /usr/local && rm -f software.zip
INFO[4085] cmd: /bin/sh
INFO[4085] args: [-c cd ${REPO}/build/distributions && unzip -q software.zip -d /usr/local && rm -f software.zip]
INFO[4085] Running: [/bin/sh -c cd ${REPO}/build/distributions && unzip -q software.zip -d /usr/local && rm -f software.zip]
INFO[4094] Taking snapshot of full filesystem...
INFO[4137] ARG JAVA_RUN_VERSION
INFO[4137] Saving file usr/local/software for later use
INFO[4139] Deleting filesystem...
error building image: deleting file system after stage 0: unlinkat //root/.gradle: directory not empty
script returned exit code 1
As you can see, the second stage tries to delete the filesystem left over from the first one, but it fails with "directory not empty".
Expected behavior
I would expect kaniko to delete the filesystem even if the directories are not empty.
To Reproduce
I'm using the following args: /kaniko/executor -f Dockerfile-java11 -c ... --cache=false --snapshotMode=time
Additional Information
- Dockerfile:
FROM abc AS builder
ARG BRANCH
ARG ATC_NEXUS_REPOURL
ARG ATC_NEXUS_REPO_ID
ARG REPO_URL
ARG REPO
ARG PROJECT
ARG REPOS_HASH
WORKDIR "/usr/local/src/"
RUN git clone \
--single-branch \
--branch "${BRANCH}" "${REPO_URL}" "${REPO}"
RUN cd ${REPO} && \
./gradlew --no-daemon buildParentRepos \
--continue \
--stacktrace \
-PnoIncludes=true && \
./gradlew --no-daemon build \
-x test \
-PnoIncludes=true
RUN cd ${REPO}/build/distributions && \
unzip -q software.zip -d /usr/local && \
rm -f software.zip && \
rm -rf /root/.gradle
# Runtime Container #############################################################################
ARG JAVA_RUN_VERSION
FROM azul/zulu-openjdk-alpine:${JAVA_RUN_VERSION:-11}
ENV APP_ROOT="/opt"
RUN set -eo pipefail && \
apk add --quiet --no-cache tzdata ttf-dejavu jq imagemagick && \
apk add --quiet --no-cache --virtual .build-deps curl
WORKDIR $APP_ROOT
RUN adduser -DH myuser && \
install -d -o myuser properties && touch properties/configuration.txt && \
install -d -o myuser data/node/admin-scripts && \
chown -R myuser $APP_ROOT
COPY --chown=myuser:myuser target ${APP_ROOT}
COPY --chown=myuser:myuser --from=builder /usr/local/software "${APP_ROOT}/software"
USER myuser
ENTRYPOINT java -classpath ...
- Kaniko Image:
kaniko-project/executor:v1.6.0-debug
Triage Notes for the Maintainers
Description | Yes/No
---|---
Please check if this is a new feature you are proposing |
Please check if the build works in docker but not in kaniko |
Please check if this error is seen when you use --cache flag |
Please check if your dockerfile is a multistage dockerfile |
Hi, I'm having the same issue...
I'm running kaniko in a GitLab pipeline, where it fails, while it seems to work locally with Docker:
docker run --entrypoint "" -v $PWD/config.json:/kaniko/.docker/config.json -v $PWD:/workspace gcr.io/kaniko-project/executor:debug /kaniko/executor --context /workspace --dockerfile /workspace/docker/server/Dockerfile --destination ideaplexus/conductor-server
Hello all,
I have the same problem.
I'm building the images with kaniko 1.6.0-debug in a GitLab CI setup adapted from https://docs.gitlab.com/ee/ci/docker/using_kaniko.html#building-a-docker-image-with-kaniko
Same behavior with and without cache in a multi-stage Dockerfile.
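For context, the job from that GitLab doc follows this general shape (a paraphrased sketch from memory, not the verbatim doc; CI_PROJECT_DIR, CI_REGISTRY_IMAGE, and CI_COMMIT_TAG are standard GitLab CI variables):
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.6.0-debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"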
I have the same issue, but it reproduces only sometimes. I'm also using multi-stage builds from Docker: the first stage builds artifacts (with Gradle) and the second one runs the application.
15:25:49 INFO[0258] Deleting filesystem...
15:25:49 error building image: deleting file system after stage 0: unlinkat /home/buildUser/.gradle/caches/modules-2: directory not empty
ERROR: script returned exit code 1
...
Does anyone at least know a workaround, or how to reproduce it reliably?
Hello @ivan-kysil-sp,
we found a solution for this kind of "bug": we changed the destination of our code.
Change this: /home/SystemUser/... to: /opt/srv/...
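For the Gradle builds in this thread, a minimal sketch of what "changing the destination" could look like, assuming the leftover directory is Gradle's cache; GRADLE_USER_HOME is Gradle's standard variable, and the /opt/srv path is just the example above:
# Sketch only: relocate Gradle's cache out of /root so stage cleanup
# never has to unlink /root/.gradle (paths are illustrative).
FROM abc AS builder
ENV GRADLE_USER_HOME=/opt/srv/gradle-cache
RUN mkdir -p "$GRADLE_USER_HOME"
# subsequent ./gradlew invocations now cache under /opt/srv/gradle-cache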
@TheCleaner:
- with which parameter / environment variable, or how?
- why this path? On my setup it complained about //usr and I don't know why.
Any update on this? I see a new version of Kaniko came out on 12/15/2023.
Same issue here in 2024, any update? --ignore-path / --cache didn't work for me.
We're also seeing the same behavior, and it is inconsistent. It used to work and suddenly started failing, with no changes to the Dockerfile or the actual build command (we can't pinpoint exactly which kaniko image version stopped working for us; for context, we're using the "latest" debug image).
This error just surfaced today in a GitLab pipeline with kaniko:v1.22.0-debug and a multistage pipeline. On April 29th this identical pipeline passed; today it fails consistently with:
INFO[0010] Deleting filesystem...
error building image: deleting file system after stage 0: unlinkat //bin: directory not empty
@josemine discovered that two of the VM hosts that act as our GitLab Runners were migrated. During this migration the XFS filesystem was created with ftype=0, but overlay2 needs ftype=1. Once this was corrected, kaniko behaved again.
Unsure if this is what caused everyone else's issue, but that was the underlying cause of our intermittent errors: occasionally the build would be scheduled on a migrated host.
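A quick way to check a host for this condition, assuming Docker's storage lives under /var/lib/docker (that mount point is the only assumption here):
# Print the geometry of the XFS filesystem backing Docker's storage;
# overlay2 requires ftype=1 (file type recorded in directory entries).
xfs_info /var/lib/docker | grep ftype
# a healthy filesystem reports: ... ftype=1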
Hi,
I have been facing the following issue for a long time, but I didn't pay much attention to it since it worked when I retried the GitLab Runner. I am using Kubernetes (K8s) with Karpenter to manage GitLab Runners.
I lack expertise in the field of K8s, so I am unsure about the following. Based on what @paisleyrob mentioned, could the problem be caused by nodes creating pods with different filesystems? If so, can we fix the filesystem of the pod being created by gitlab-runner?
I was seeing the error below; could it be caused by the cache option that I am using?
error building image: deleting file system after stage 0: unlinkat //root/.gradle/caches/modules-2: directory not empty
Thanks in advance.
@ChobobDev Our issue was resolved once we corrected the XFS ftype value.
The XFS man page describes ftype as follows:
This feature allows the inode type to be stored in the directory structure so that readdir(3) and getdents(2) do not need to look up the inode to determine the inode type.
The value is either 0 or 1, with 1 signifying that filetype information will be stored in the directory structure. The default value is 1.
This Stack Exchange question/answer goes into how to fix an existing XFS file system.
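In short, ftype is fixed at mkfs time, so repairing an existing filesystem means recreating it. A hedged sketch with /dev/sdX and the mount point as placeholders (this destroys data, back up first):
# WARNING: mkfs.xfs wipes the device; back up /var/lib/docker first.
umount /var/lib/docker
mkfs.xfs -f -n ftype=1 /dev/sdX   # ftype=1 stores file types in directory entries
mount /dev/sdX /var/lib/docker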
@paisleyrob Deeply appreciated, thank you for sharing. I will try it to see if it also resolves the situation I am facing at the moment.
Many thanks :)
I have exactly the same issue trying to create a base image with the dependencies included in it. We are using gcr.io/kaniko-project/executor:v1.12.1-debug for the build.
We're using the following command:
/kaniko/executor \
--registry-mirror my.mirror \
--compressed-caching=false \
--context ${DOCKERFILE_CONTEXT} \
--dockerfile ${DOCKERFILE_CONTEXT}/$DOCKERFILE_NAME \
--destination ${DESTINATON_PATH}:latest
We are on GitLab 17.0.1.
The issue is random, and retrying the failing job makes it succeed.
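Until the root cause is found, one stopgap is to let GitLab retry the job automatically; retry is standard GitLab CI syntax, while the job name and stage below are illustrative:
# Hypothetical .gitlab-ci.yml fragment: retry the flaky kaniko build
# up to twice on script failure instead of re-running it by hand.
build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.12.1-debug
    entrypoint: [""]
  retry:
    max: 2
    when: script_failure
  script:
    - /kaniko/executor
      --registry-mirror my.mirror
      --compressed-caching=false
      --context ${DOCKERFILE_CONTEXT}
      --dockerfile ${DOCKERFILE_CONTEXT}/$DOCKERFILE_NAME
      --destination ${DESTINATON_PATH}:latest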