Multi-stage builds silently crashing
Actual behavior
The kaniko build crashes silently right after taking the full filesystem snapshot, with no useful error. The same build works fine with dind, and disabling the Kaniko cache doesn't help.
Expected behavior
The build should complete without issue.
To Reproduce
Steps to reproduce the behavior:
- Have your GitLab Runner on a GKE Autopilot cluster
- Run your GitLab CI job with Kaniko instead of dind (a sketch of the job follows)
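A minimal `.gitlab-ci.yml` along these lines reproduces the setup. This is a sketch reconstructed from the job log below; the `TAG` and `LATEST_TAG` variables are assumed to be defined elsewhere in the pipeline:

```yaml
build:
  stage: build
  image:
    # The debug image ships a shell, which GitLab needs to run the script
    name: gcr.io/kaniko-project/executor:v1.9.0-debug
    entrypoint: [""]
  script:
    # Write registry credentials, exactly as in the job log below
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > /kaniko/.docker/config.json
    # Build and push both tags; TAG and LATEST_TAG come from pipeline variables
    - /kaniko/executor --context ${CI_PROJECT_DIR} --dockerfile ${CI_PROJECT_DIR}/Dockerfile --destination ${CI_REGISTRY_IMAGE}:${TAG} --destination ${CI_REGISTRY_IMAGE}:${LATEST_TAG}
```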
Additional Information
- Dockerfile
```dockerfile
FROM node:16 as builder
COPY . /app
WORKDIR /app
RUN yarn install --frozen-lockfile --production
FROM gcr.io/distroless/nodejs:16
COPY --from=builder /app /app
WORKDIR /app
EXPOSE 8080
CMD ["--experimental-modules", "--experimental-json-modules", "src/server.js"]
- Build Context
Multi-stage build: the first stage copies the Express.js app and installs its dependencies; the second stage reuses the app directory to produce a distroless image.
- Kaniko Image (fully qualified with digest)
gcr.io/kaniko-project/executor:a8498c762f34aabc62966c69169b79a04e04a4d5-debug (v1.9.0-debug)
Triage Notes for the Maintainers
CI log:
```
Executing "step_script" stage of the job script
$ mkdir -p /kaniko/.docker
$ echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > /kaniko/.docker/config.json
$ /kaniko/executor --context ${CI_PROJECT_DIR} --dockerfile ${CI_PROJECT_DIR}/Dockerfile --destination ${CI_REGISTRY_IMAGE}:${TAG} --destination ${CI_REGISTRY_IMAGE}:${LATEST_TAG}
INFO[0000] Resolved base name node:16 to builder
INFO[0000] Retrieving image manifest node:16
INFO[0000] Retrieving image node:16 from registry index.docker.io
INFO[0001] Retrieving image manifest gcr.io/distroless/nodejs:16
INFO[0001] Retrieving image gcr.io/distroless/nodejs:16 from registry gcr.io
INFO[0002] Built cross stage deps: map[0:[/app]]
INFO[0002] Retrieving image manifest node:16
INFO[0002] Returning cached image manifest
INFO[0002] Executing 0 build triggers
INFO[0002] Building stage 'node:16' [idx: '0', base-idx: '-1']
INFO[0002] Unpacking rootfs as cmd COPY . /app requires it.
INFO[0045] COPY . /app
INFO[0052] Taking snapshot of files...
INFO[0061] WORKDIR /app
INFO[0061] Cmd: workdir
INFO[0061] Changed working directory to /app
INFO[0061] No files changed in this command, skipping snapshotting.
INFO[0061] RUN yarn install --frozen-lockfile --production
INFO[0061] Initializing snapshotter ...
INFO[0061] Taking snapshot of full filesystem...
Cleaning up project directory and file based variables
ERROR: Job failed: pod "runner-yrykheow-project-61-concurrent-0gkkzz" status is "Failed"
```
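Since the crash produces nothing useful at the default log level, one avenue for triage is kaniko's `--verbosity` flag, which accepts `debug` and `trace`. A sketch of the same invocation with debug logging; only the last flag differs from the job above:

```sh
/kaniko/executor \
  --context ${CI_PROJECT_DIR} \
  --dockerfile ${CI_PROJECT_DIR}/Dockerfile \
  --destination ${CI_REGISTRY_IMAGE}:${TAG} \
  --destination ${CI_REGISTRY_IMAGE}:${LATEST_TAG} \
  --verbosity debug
```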
Description | Yes/No |
---|---|
Please check if this is a new feature you are proposing | No |
Please check if the build works in docker but not in kaniko | Yes |
Please check if this error is seen when you use --cache flag | Yes |
Please check if your dockerfile is a multistage dockerfile | Yes |
Doubling the memory request to 4Gi didn't help, so it doesn't appear to be OOM-killed. I also tried 1.8.1 and 1.7.0, with the same result.
FYI, going back to a single-stage build works, so this is specifically a multi-stage issue.
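For reference, the working single-stage variant looks roughly like this; a sketch assuming the same layout as the failing Dockerfile, since the exact file isn't included here:

```dockerfile
# Single-stage equivalent (sketch, not the verbatim file that was tested)
FROM node:16
COPY . /app
WORKDIR /app
RUN yarn install --frozen-lockfile --production
EXPOSE 8080
# node must be invoked explicitly here; the distroless image's entrypoint supplied it
CMD ["node", "--experimental-modules", "--experimental-json-modules", "src/server.js"]
```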
Got another multi-stage Dockerfile that crashes the same way; unfortunately, that one can't easily be converted to a single stage.
```dockerfile
FROM node:16 as builder
COPY . /app
WORKDIR /app
RUN yarn install --frozen-lockfile
ARG VITE_HIDE_INTERNAL
ARG VITE_HIDE_TRY_IT
ENV VITE_HIDE_INTERNAL=$VITE_HIDE_INTERNAL
ENV VITE_HIDE_TRY_IT=$VITE_HIDE_TRY_IT
RUN yarn build
FROM flashspys/nginx-static
COPY --from=builder /app/build /static
EXPOSE 80
```