
Building multiple images with `--cleanup` fails in v1.5.0 when calling `chmod`

Open jmmk opened this issue 3 years ago • 19 comments

Actual behavior

When building multiple images (using --cleanup on each of them), the first build succeeds but subsequent builds fail with the following error:

ERROR: Process exited immediately after creation. See output below
Executing sh script inside container kaniko of pod jobName-branchName-buildNumber-cpmb7-q99pr-0v57c
OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "chdir to cwd (\"/workspace\") set in config.json failed: no such file or directory": unknown

This error occurs when running chmod +x docker-entrypoint.sh inside the Kaniko container in a Jenkins build.

Expected behavior

In v1.3.0, all builds run with no errors, and the same is expected of v1.5.0.

To Reproduce

This is inside a loop in a Jenkins build. Each iteration writes a Dockerfile to the current working directory, which is a shared mounted volume like /home/jenkins/agent/workspace/jobName_branchName.

        container('kaniko') {
          script {
            jobs.each { job ->
              writeFile(file: 'Dockerfile', text: job.dockerfile)
              writeFile(file: 'docker-entrypoint.sh', text: job.entrypoint)
              sh 'chmod +x docker-entrypoint.sh'
              sh "/kaniko/executor -f `pwd`/Dockerfile -c `pwd` --cache=true --destination=${job.dockerImage}:${job.tag} --cleanup"
            }
          }
        }

There is also a volume mount at /kaniko/.docker with credentials in config.json.
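
For context, the mounted file is a standard Docker client config; a minimal sketch of what it might contain (the registry host and auth value here are illustrative):

    $ cat /kaniko/.docker/config.json
    {
      "auths": {
        "registry.example.com": {
          "auth": "<base64-encoded username:password>"
        }
      }
    }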

I can confirm that removing --cleanup allows the build to succeed (though it is not a good workaround, because the filesystem stays dirty between builds and leftover files can corrupt subsequent images).

Additional Information

  • Dockerfile

The Dockerfile is generated dynamically, but here is an example of the final output:

FROM python:3.6.12-buster
WORKDIR /app
COPY docker-entrypoint.sh .
COPY requirements.txt .
RUN pip3 install -r ./requirements.txt
COPY myapp .
ENTRYPOINT [ "./docker-entrypoint.sh" ]
CMD [ "python3", "myscript.py" ]

  • Kaniko Image (fully qualified with digest)

Working image: v1.3.0-debug, sha256:473d6dfb011c69f32192e668d86a47c0235791e7e857c870ad70c5e86ec07e8c
Failing image: v1.5.0-debug, sha256:a0f4fc8cbd93a94ad5ab2148b44b80d26eb42611330b598d77f3f591f606379a

Triage Notes for the Maintainers

  • [ ] Please check if this is a new feature you are proposing
  • [ ] Please check if the build works in docker but not in kaniko
  • [x] Please check if this error is seen when you use --cache flag
  • [ ] Please check if your dockerfile is a multistage dockerfile

jmmk · Feb 17 '21

Hi, I am also seeing this issue in my Jenkins build setup using a Kaniko container. The first image is created successfully, but subsequent stages that build other images fail. Rolling back to v1.3.0 resolves the issue; the pipeline then works with the identical Jenkinsfile.

I am also using the --cleanup flag to clean the filesystem, and I do not consider removing it a viable solution at the moment.

jbogdahn · Feb 18 '21

I'm also seeing this. Reverting to 1.3.0 fixes the issue for now.

scar-lovevery · Feb 26 '21

Anyone find a workaround for this? I noticed the latest release (1.6.0) still results in this same error.

austinorth · May 07 '21

@austinorth I pinned the build using --cleanup to version 1.3.0. Until we hear something back and unless you require a feature from recent builds, that seems the best way to go.

jmmk · May 07 '21

I ran into this error late Friday... with fresh eyes this morning, I used kubectl to create a kaniko pod/container directly and to kubectl exec into it, as I assume Jenkins does.

I found that --cleanup removes the /workspace directory, and subsequent kubectl exec commands fail because the WORKDIR /workspace no longer exists.
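
A minimal sketch of reproducing this outside Jenkins, assuming a pod named kaniko-pod whose kaniko container has workingDir /workspace (pod name, registry, and executor arguments are illustrative):

    # first exec: the build succeeds, but --cleanup deletes /workspace itself
    kubectl exec kaniko-pod -c kaniko -- /kaniko/executor \
        -f /workspace/Dockerfile -c /workspace \
        --destination=registry.example.com/app:tag --cleanup
    # second exec: fails before the command even runs, because the container's
    # configured cwd (/workspace) no longer exists
    kubectl exec kaniko-pod -c kaniko -- ls /workspace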

The quick-and-dirty workaround I am using is to append && mkdir -p /workspace to the end of my /kaniko/executor command.

Using @jmmk's example:

        container('kaniko') {
          script {
            jobs.each { job ->
              writeFile(file: 'Dockerfile', text: job.dockerfile)
              writeFile(file: 'docker-entrypoint.sh', text: job.entrypoint)
              sh 'chmod +x docker-entrypoint.sh'
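              // the trailing mkdir recreates /workspace (deleted by --cleanup),
              // so later sh steps and the next loop iteration still have a valid cwd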
              sh "/kaniko/executor -f `pwd`/Dockerfile -c `pwd` --cache=true --destination=${job.dockerImage}:${job.tag} --cleanup && mkdir -p /workspace"
            }
          }
        }

LelandSindt · Aug 09 '21

To me this raises the question: should --cleanup remove the contents of /workspace rather than removing /workspace entirely?
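
A sketch of that distinction in plain shell (illustrative only, not kaniko's actual implementation):

    # what --cleanup effectively does today, per the behavior above:
    rm -rf /workspace
    # the proposal: clear the contents but keep the directory,
    # so the exec cwd stays valid
    find /workspace -mindepth 1 -delete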

LelandSindt · Aug 09 '21

Ran into this same issue today; the --cleanup && mkdir -p /workspace trick worked.

I second @LelandSindt's point that --cleanup should be more targeted in how it cleans up directories; otherwise we need another solution that's Jenkins-aware, so that this doesn't blow up Jenkins the way it currently does.

No lie, this was pretty surprising to run into considering Kaniko claims to focus on the Kubernetes experience and Jenkins is likely the primary way folks are going to be doing CI/CD with Kaniko. If you're using Jenkins and building more than one image in the same container, you're gonna hit this for sure. 🐛

medavisjr · Sep 17 '21

In v1.6.0, kaniko's --cleanup flag deletes the /busybox directory as well, so you can't mkdir /workspace afterwards, since the mkdir executable is located in /busybox.

An undocumented "feature" is to add /busybox to --ignore-path so that --cleanup doesn't delete it.

Using the example above:

        container('kaniko') {
          script {
            jobs.each { job ->
              writeFile(file: 'Dockerfile', text: job.dockerfile)
              writeFile(file: 'docker-entrypoint.sh', text: job.entrypoint)
              sh 'chmod +x docker-entrypoint.sh'
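              // --ignore-path=/busybox below keeps the shell utilities alive,
              // so the trailing mkdir still works after --cleanup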
              sh "/kaniko/executor -f `pwd`/Dockerfile -c `pwd` --cache=true --ignore-path=/busybox --destination=${job.dockerImage}:${job.tag} --cleanup && mkdir -p /workspace"
            }
          }
        }

KeeganOP · Oct 18 '21

I encountered a similar issue (no such file or directory), and the workaround from @KeeganOP works perfectly. However, I'm not using the --cleanup flag in my build step. Since --cleanup defaults to false, I'm wondering why my workspace directory is also removed.

allenhsu · Nov 29 '21

@allenhsu we also had the issue without using --cleanup. Are you using a multi-stage build? I believe those do the equivalent of --cleanup between each stage.

hundt-corbalt · Jun 16 '22

+1 Maybe there should be a flag to skip the cleanup (for multi-stage or single-stage builds)?

Pipelines in Kubernetes environments usually run in isolated workloads (e.g. GitLab Runners with executor: kubernetes, etc.), so the filesystem cleanup step could perhaps be skipped.

kladiv · Feb 06 '23

When is this going to be fixed? I have a requirement to build multiple images using one container, all of the Dockerfiles contain chmod, AND I need to save and push the images, using the --tarPath flag and pushing to ECR with Crane.

I've reverted to v1.3.0 so that the --cleanup flag works, but the --tarPath flag is not available in that version!
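
For reference, a minimal sketch of that save-then-push flow (image names are illustrative, and the exact flag interplay may vary by kaniko version):

    # write the image to a tarball instead of pushing it
    # (--tarPath requires --destination for the tag)
    /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --no-push \
        --tarPath=image.tar --destination=myapp:latest --cleanup
    # push the tarball to ECR with crane
    crane push image.tar 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest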

Mo0rBy · Aug 16 '23

@Mo0rBy I have this problem with all versions of kaniko from 1.0.0 to the latest, 1.16.0 (even with gcr.io/kaniko-project/executor:v1.3.0-debug, which @Mo0rBy said works).

dadurex · Sep 27 '23

@dadurex It stopped working for me, so now I have to spin up multiple Kaniko containers to build all my images. It's approximately 10 containers, which is annoying, but it works.

Mo0rBy · Sep 27 '23

A solution for us is to change the remote FS in the container settings, so that Kaniko doesn't delete it. (screenshot attached)

pablogrs · Nov 17 '23

A solution for us is to change the remote FS in the container settings, so that Kaniko doesn't delete it. (screenshot attached)

Could you add a larger screenshot to show where this setting is, please? I'll need to find it in the UI, then figure out how to set it within my podTemplate yaml for the Kaniko container.

Mo0rBy · Nov 17 '23

In fairness, I am not using Kubernetes but Docker on ECS. The config would be at https://your-jenkins.com/manage/configureClouds/ under "ECS agent templates"; the Docker agent template would be similar, and I assume there is something similar for K8s.

There is a solution for K8s in the CloudBees docs: https://docs.cloudbees.com/docs/cloudbees-ci-kb/latest/cloudbees-ci-on-modern-cloud-platforms/what-you-need-to-know-when-using-kaniko-from-kubernetes-jenkins-agents

pablogrs · Nov 17 '23

In fairness, I am not using Kubernetes but Docker on ECS. The config would be at https://your-jenkins.com/manage/configureClouds/ under "ECS agent templates"; the Docker agent template would be similar, and I assume there is something similar for K8s.

There is a solution for K8s in the CloudBees docs: https://docs.cloudbees.com/docs/cloudbees-ci-kb/latest/cloudbees-ci-on-modern-cloud-platforms/what-you-need-to-know-when-using-kaniko-from-kubernetes-jenkins-agents

I followed that 2nd link to set up Kaniko initially. I don't believe we have any directory issues; we're just not able to use --cleanup. I'm not going to bother messing with these workingDirs, as it all currently works and I'd break it lol. Thank you though!

Mo0rBy · Nov 17 '23

In v1.6.0, kaniko's --cleanup flag deletes the /busybox directory as well, so you can't mkdir /workspace afterwards, since the mkdir executable is located in /busybox.

An undocumented "feature" is to add /busybox to --ignore-path so that --cleanup doesn't delete it.

Using the example above:

        container('kaniko') {
          script {
            jobs.each { job ->
              writeFile(file: 'Dockerfile', text: job.dockerfile)
              writeFile(file: 'docker-entrypoint.sh', text: job.entrypoint)
              sh 'chmod +x docker-entrypoint.sh'
              sh "/kaniko/executor -f `pwd`/Dockerfile -c `pwd` --cache=true --ignore-path=/busybox --destination=${job.dockerImage}:${job.tag} --cleanup && mkdir -p /workspace"
            }
          }
        }

I also had the same error, shown below:

ERROR: Process exited immediately after creation. See output below
Executing sh script inside container kaniko of pod docker-main-16-5rxz8-0f2qt-c70qf
OCI runtime exec failed: exec failed: unable to start container process: chdir to cwd ("/workspace") set in config.json failed: no such file or directory: unknown

I solved it by adding --cleanup && mkdir -p /workspace.

Zerohertz · Dec 01 '23