Image built with Kaniko claims to be OCI but in reality is not
Actual behavior
Coming from https://github.com/containers/buildah/issues/3668
I am using Kaniko to build an image based on an OCI image.
The base image has the following manifest (notice the layer mediaType: application/vnd.oci.image.layer.v1.tar+gzip):
>>> skopeo inspect --raw docker://${BASE_IMAGE} | jq .
{
"schemaVersion": 2,
"config": {
"mediaType": "application/vnd.oci.image.config.v1+json",
"digest": "sha256:61679cc9cfe1e3c757bfe2ff01222e25a4e0349ff70739f7c982df4e9484d5a4",
"size": 419
},
"layers": [
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"digest": "sha256:3bc51580eb8a78b645a88f9c89c0779be50944543b917c682af93035002c2d99",
"size": 79650768
}
]
}
If I use this image as the base for another image built with Kaniko, the resulting image has the following manifest:
{
"schemaVersion": 2,
"mediaType": "",
"config": {
"mediaType": "application/vnd.oci.image.config.v1+json",
"size": 906,
"digest": "sha256:da250df73fc4c57e758739f562f7c5bf77703f0547951a254a6043719ccb35a6"
},
"layers": [
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 79650768,
"digest": "sha256:3bc51580eb8a78b645a88f9c89c0779be50944543b917c682af93035002c2d99"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 222,
"digest": "sha256:8962e548f920af01274257418f3414570b5d0761524773a86df7835344e467f7"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 198,
"digest": "sha256:2b36e6ccfa17b538ef83c8aff96ffe823966883f90b4517b35346b57cc642c46"
}
]
}
This manifest claims an OCI config (application/vnd.oci.image.config.v1+json) but actually contains Docker layers (application/vnd.docker.image.rootfs.diff.tar.gzip).
This surfaces as an error when the child image is used as a base image in podman build, which reports (again, see https://github.com/containers/buildah/issues/3668):
Error: error creating build container: error preparing image configuration: error converting image
"containers-storage:[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@04ee292f7a5549e765c99205acc567738a09eb084409cd71f6600facd3743c51"
from "application/vnd.oci.image.manifest.v1+json" to "application/vnd.docker.distribution.manifest.v2+json":
Unknown media type during manifest conversion: "application/vnd.docker.image.rootfs.diff.tar.gzip"
Expected behavior
As @vrothberg suggests, the layers should be converted to OCI ones during build or when pushing to the registry.
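Purely as an illustration of that suggestion (not kaniko code), here is a minimal Go sketch using go-containerregistry's manifest types; it relies on the fact that Docker and OCI gzip layers share the same blob format, so only the descriptor's media type label needs to be relabelled:

package main

import (
	"encoding/json"
	"os"

	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/types"
)

func main() {
	// Read a raw image manifest (e.g. the output of `skopeo inspect --raw`) from stdin.
	var m v1.Manifest
	if err := json.NewDecoder(os.Stdin).Decode(&m); err != nil {
		panic(err)
	}
	// Relabel Docker gzip layer descriptors as OCI gzip layer descriptors.
	// The layer blobs themselves are identical for both media types; only the
	// manifest (and therefore its digest) changes.
	for i, desc := range m.Layers {
		if desc.MediaType == types.DockerLayer {
			m.Layers[i].MediaType = types.OCILayer
		}
	}
	// Print the relabelled manifest.
	_ = json.NewEncoder(os.Stdout).Encode(&m)
}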
To Reproduce
Steps to reproduce the behavior:
- Have a base image with "mediaType": "application/vnd.oci.image.config.v1+json".
- Use the base image to build another (multistage) image with Kaniko.
Additional Information
- Dockerfile
Unfortunately it is quite difficult to find a public image with "mediaType": "application/vnd.oci.image.config.v1+json", but one can be built with podman as follows:
>> cat Containerfile
FROM docker.io/alpine
RUN touch file.txt
RUN echo "hello world"
>> podman build -t base-image -f Containerfile .
>> podman push base-image path/to/remote/repo/base-image
Dockerfile for child image:
FROM ubuntu:20.04 as installer
ADD installer.sh .
RUN bash installer.sh
######################
FROM path/to/remote/repo/base-image
COPY --from=installer /opt/application /opt/application
RUN ln -s /opt/application/1.0.0 /opt/application/stable
CMD ["/bin/bash"]
- Build Context (files needed to build the Dockerfile via ADD/COPY commands):
>>> cat installer.sh
mkdir -p /opt/application/1.0.0
touch /opt/application/1.0.0/file.txt
touch /opt/application/1.0.0/file2.txt
touch /opt/application/1.0.0/file3.txt
- Kaniko Image (fully qualified with digest)
Triage Notes for the Maintainers
Description | Yes/No
---|---
Please check if this is a new feature you are proposing |
Please check if the build works in docker but not in kaniko |
Please check if this error is seen when you use --cache flag |
Please check if your dockerfile is a multistage dockerfile |
Further investigations...
This seems to happen regardless of whether the intermediate image is multistage or single stage.
So to trigger the bug, all that is needed is to build an intermediate image with Kaniko from the following Dockerfile:
> cat Containerfile
FROM $BASE_IMAGE_WITH_MEDIA_TYPE_OCI
RUN mkdir /path
RUN touch /path/file.txt
And then
> skopeo inspect --raw docker://{IMAGE} | jq .
{
"config": {
...
"mediaType": "application/vnd.oci.image.config.v1+json",
...
},
"layers": [
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 2896369,
"digest": "sha256:3c4e9198e8c15669838fa75b9fde03039cc4a256d6868d214d966bd8f27b093d"
},
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 209,
"digest": "sha256:d95bec0a2faf35ce091c8b575e61cd11e955f0d3a32444d9f55b3b49972ad6ab"
},
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 42,
"digest": "sha256:4ca545ee6d5db5c1170386eeb39b2ffe3bd46e5d4a73a9acbebc805f19607eb3"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 134,
"digest": "sha256:2b75492cbc3ed2684c79591ef48f2421f15ad9244d1bb197b41287f1e7edac12"
}
],
"annotations": {
...
}
Really nice investigation! We have a similar issue when it comes to importing images built with Kaniko into Harbor. One image can be imported, another cannot. The images are built on GitLab CI, published to the GitLab registry, and can be used without issue by podman 3.
We find kaniko a very efficient way of building images and it would be great if we could keep using it.
Thanks for investigating! We have images built with Kaniko that can be run by Podman 3.3.1 on CentOS Stream 8 but not pushed to another registry. The error message is slightly different, but it shows that Podman cannot handle the format.
$ skopeo inspect --raw docker://<kaniko built image> | jq .
{
"schemaVersion": 2,
"mediaType": "",
"config": {
"mediaType": "application/vnd.oci.image.config.v1+json",
"size": 2229,
"digest": "sha256:1d9384a1e8cf5636c4c525b02d3eb2c5e2a6300987717cf00b798db54aabd955"
},
"layers": [
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 79848180,
"digest": "sha256:78846bc60c09f099fe07532fd402aff11d9704b96310c7bcf0a7ee20085774a1"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 334659526,
"digest": "sha256:e8430b5e4f32476399aedbebd2bce8e31406d9f53e61bce5149b0f77fc3e11df"
},
...
When trying to push:
$ podman push <kaniko built image>
Getting image source signatures
Copying blob 59a6252e61ff [--------------------------------------] 0.0b / 0.0b
Error: creating an updated image manifest: preparing updated manifest,
layer "sha256:b208550f9bbbc279a3ea5d174d86f9aadd31da9ae83f3496e3e92d2ea864cef6":
unsupported MIME type for compression: application/vnd.docker.image.rootfs.diff.tar.gzip
Full disclosure: @LajosCseppento, @remivoirin and I all work in the same organization, but on different projects and teams; it just happens that we ran into the same issue in a similar timeframe.
@crisbal For us this has been present at least since July 2021, when Remi & co. tried to import our image into Harbor.
Podman was rejecting the image built with Kaniko because of the mix of "oci" and "docker" layers inside an image whose declared mediaType is OCI.
As mentioned in https://github.com/containers/buildah/issues/3668#issuecomment-1071007448, it is possible to circumvent the issue by pushing the base image with the "docker" mediaType, so that new Docker layers created on top of the base image with Kaniko will be accepted by Podman. This is done with podman push -f v2s2 <image>.
Have there been any new discoveries on this issue? We just started running into this same issue; trying to add layers to an OCI image with Kaniko makes the image unusable by docker and podman.
@mn132 What we ended up doing on our side was working around the problem by publishing/republishing the images we need in non-OCI format.
If you are pushing OCI images with podman, look into the --format option of podman push: https://docs.podman.io/en/latest/markdown/podman-push.1.html#format-f-format
I have the same issue with podman build using a base image built by Kaniko. Like other affected users, I can run the Kaniko-built images with podman run, but I cannot use them to build new images with podman build:
# podman version 4.1.1 on Fedora36
podman build -t <image>:<tag> .
STEP 1/2 FROM <kaniko-built-image>
Error: error creating build container: error preparing image configuration: resetting recorded compression for "..."
unsupported MIME type for compression: application/vnd.docker.image.rootfs.diff.tar.gzip
FWIW c/image ≥ 5.22.0 is rather more tolerant of unknown MIME types. That should show up in Podman soon; currently there is https://github.com/containers/podman/releases/tag/v4.2.0-rc3.
I'm afraid I can't spare the time right now to test the full scenario, to see whether Podman can fully consume (or possibly even correct) these images, or whether avoiding this failure just runs into another problem soon after.
With the Ubuntu 22.04 images now in OCI format, this has become a pressing issue for us. I found a workaround: use --no-push and have Kaniko just write to a tar file (--tarPath). Then docker image load that, and let Docker do the push; that seems to result in a valid manifest. The same approach appears to work with Podman.
Is this getting any traction? Otherwise we're going to have to move away from kaniko.
We probably have the same issue as in #2392
Side note: This affects usage of https://docs.snyk.io/integrations/ci-cd-integrations/snyk-ci-cd-integration-deployment-and-strategies/snyk-container-specific-ci-cd-strategies#running-pipeline-if-a-docker-daemon-is-not-available if the following circumstances exist:
- You containerize each build task but do not mount the Docker socket for security and performance reasons.
- Pipeline tasks are split across hosts (or even clusters) and rely on artifacts being handed off through a central volume or an intermediate registry/object store.
- You work exclusively in an ecosystem that only uses OCI-compliant container images.
From my poking around, it looks like go-containerregistry's tarball.LayerFromFile is being used to convert the snapshots taken at every RUN into layers: https://github.com/GoogleContainerTools/kaniko/blob/61312a95ae1a305f5154f8d88dc58ee1e3259ae8/pkg/executor/build.go#L520
This method is "deprecated" in that library but apparently still there. There was an abandoned PR to switch to its preferred replacement: https://github.com/GoogleContainerTools/kaniko/pull/449
Anyway, it seems like what is needed here is for kaniko to pass in WithMediaType => types.OCILayer whenever it is working FROM a known-OCI image, because the default is types.DockerLayer: https://github.com/google/go-containerregistry/blob/cd7761563a00fb38bb6d2126fc0f262fb6e64db1/pkg/v1/tarball/layer.go#L237
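A rough sketch of that idea (an illustration only, not the actual kaniko code path), assuming a hypothetical helper that is told the base image's manifest media type:

package snapshot

import (
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
	"github.com/google/go-containerregistry/pkg/v1/types"
)

// layerFromSnapshot is a hypothetical helper: it wraps a snapshot tarball as
// an image layer, choosing the layer media type to match the base image's
// manifest media type instead of always taking go-containerregistry's
// default (types.DockerLayer).
func layerFromSnapshot(tarPath string, baseManifestType types.MediaType) (v1.Layer, error) {
	layerType := types.DockerLayer
	if baseManifestType == types.OCIManifestSchema1 {
		// The base image is OCI, so emit OCI-typed layers.
		layerType = types.OCILayer
	}
	return tarball.LayerFromFile(tarPath, tarball.WithMediaType(layerType))
}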
I just pinned all my Dockerfiles' references to ubuntu:* images back to the -20221130 tags, because those are the last ones published in the Docker format and they still work properly with both Kaniko and Red Hat tools (podman, buildah, quay, etc.):
- https://explore.ggcr.dev/?image=ubuntu%3Ajammy-20221130
- all docker mediaTypes, fully compatible
- https://explore.ggcr.dev/?image=ubuntu%3Ajammy-20230126
- all OCI mediaTypes, but Kaniko turns images based on this into an OCI-typed image with Docker-typed layers, which breaks the Red Hat tools because they are/were picky about this
We encountered the same problem. Is there any progress on this?
@kfix I tried your branch (https://github.com/kfix/kaniko/tree/fix_mismatched_oci_layers), but I get the same error message when pulling the image with docker:
mediaType in manifest should be 'application/vnd.docker.distribution.manifest.v2+json' not 'application/vnd.oci.image.manifest.v1+json'
@michaelkebe I think you're having a different problem there? I could not reproduce that with Docker 20.10.21.
I managed to get the integration tests running for my tweak and confirmed a before-and-after difference with integration/dockerfiles/Dockerfile_test_issue_1836, which starts with FROM ubuntu:jammy.
# build the thing
apt-get install golang docker.io
make
# run the thing
LOCAL=1 DOCKERFILE_PATTERN="Dockerfile_test_issue_1836" make integration-test-run
# check the thing
skopeo inspect --no-creds --tls-verify=false --raw docker://localhost:5000/kaniko-test/kaniko-dockerfile_test_issue_1836:latest | jq .
# see the mismatched OCI & docker layers as previously reported
{
"schemaVersion": 2,
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"config": {
"mediaType": "application/vnd.oci.image.config.v1+json",
"size": 2596,
"digest": "sha256:e106f558f4005779dcd1514f02507c1570cd2ae9a9d93d20956ac9f5abcc2ce6"
},
"layers": [
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 29533950,
"digest": "sha256:2ab09b027e7f3a0c2e8bb1944ac46de38cebab7145f0bd6effebfe5492c818b6"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 327,
"digest": "sha256:a50e9e46f6d7aa93205a220342a62fe85fb40c215f6344d5a270811524d15e05"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 338,
"digest": "sha256:5e6db6c7987c6aa7453897af53a754cc6cf77d002229e370398e1ace0c8b6dcc"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 328,
"digest": "sha256:6a92f48e43aaf6796372807ac317183189b9e62fcaedfd5e2f41f5523b8b5752"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 295,
"digest": "sha256:77bfbc9a2181bbd26821e28f4d8b9c7ad12b17b3eb738985da896e0774399cd6"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 265,
"digest": "sha256:5ff684da2230aa257b3a0e50e1b39b6a94fdb37bb59a1e20c66fd9cbbe58752c"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 337,
"digest": "sha256:a8615c74319aa093a3f54e38b3b570e9b7856605107964c73cbe3badc0906026"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 261,
"digest": "sha256:f9b9642a08fdebb8ce672e8b373ddc3f8b0c77d6eb19d75eccee2d3e71fab07f"
}
]
}
# rebuild with my changes and rerun tests & skopeo, get a uniformly OCI'd manifest
{
"schemaVersion": 2,
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"config": {
"mediaType": "application/vnd.oci.image.config.v1+json",
"size": 2596,
"digest": "sha256:8a5d586f6b5fe07ea5fa14d26ce5c842546870299ac052867ac94facb193606a"
},
"layers": [
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 29533950,
"digest": "sha256:2ab09b027e7f3a0c2e8bb1944ac46de38cebab7145f0bd6effebfe5492c818b6"
},
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 333,
"digest": "sha256:b4463adfa1470965945a94a0ffbbeb0df575e5bd80ea222251804ea7bfe37b56"
},
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 342,
"digest": "sha256:b91b0cfd7723a61decd18f418be4bec517b73a37dd1e7d7507a220325996a747"
},
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 341,
"digest": "sha256:f52e21b42328f39a01e4f6bd47d8e0525ed3ff4d9c37cc7dd869e48578f3c655"
},
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 297,
"digest": "sha256:045e6a36440c4e9c5db00247f3fceeff655f1a8026a9590e68a04b776f466cde"
},
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 262,
"digest": "sha256:c8266b5304a03a82bc74c7680bd7f69114c95a1785adcac4260eab4d8ee02bdc"
},
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 336,
"digest": "sha256:4e1129dc6d38f1f6396584ac72f05d3c7e60d6076a9367372bcca7d489799f07"
},
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 262,
"digest": "sha256:184e3225ee4b0dd8d8f2129597b8ee06c90f1b4cd75ca64647cd283630df1162"
}
]
}
I was also able to pull the kaniko-built images from the integration tests' repository into both Docker Engine and Podman 3.4. I could even run them with the mismatched layers; it is only the podman build command that fails unless my fix is applied.
# podman pull --tls-verify=false docker://localhost:5000/kaniko-test/kaniko-dockerfile_test_issue_1836:latest
# cat Containerfile
FROM localhost:5000/kaniko-test/kaniko-dockerfile_test_issue_1836:latest as kko
RUN uname -a
# podman build .
STEP 1/2: FROM localhost:5000/kaniko-test/kaniko-dockerfile_test_issue_1836:latest AS kko
Error: error creating build container: error preparing image configuration: error converting image "containers-storage:[btrfs@/mnt/btrfs/joey/podman/storage+/run/user/1000/containers]@e106f558f4005779dcd1514f02507c1570cd2ae9a9d93d20956ac9f5abcc2ce6" from "application/vnd.oci.image.manifest.v1+json" to "application/vnd.docker.distribution.manifest.v2+json": Unknown media type during manifest conversion: "application/vnd.docker.image.rootfs.diff.tar.gzip"
# rebuild & retest kaniko with fix and repull test image
$ podman build .
STEP 1/2: FROM localhost:5000/kaniko-test/kaniko-dockerfile_test_issue_1836:latest AS kko
STEP 2/2: RUN uname -a
Linux e9418095f3be 5.15.0-56-generic #62-Ubuntu SMP Tue Nov 22 19:54:14 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
COMMIT
--> 05014cd9542
05014cd95424140279e513ccbeb3a1d20618a39bd52185c46846b26eff89ce66
I say "tests running" because they appear to be a bit busted. for OCI specifically by https://github.com/GoogleContainerTools/container-diff/pull/390, fixed as a part of #2425 I'm hoping.
I think I'm going to have to wait on that stuff before attempting to PR this.
I say "tests running" because they appear to be a bit busted. for OCI specifically by GoogleContainerTools/container-diff#390, fixed as a part of #2425 I'm hoping.
I think I'm going to have to wait on that stuff before attempting to PR this.
As soon as https://github.com/GoogleContainerTools/container-diff/pull/390 is merged, I'll change #2425 to use that commit and have green tests again. If you want to test in the meantime, you can rebase on top of #2425, install the fixed container-diff, and run the integration tests locally or in the CI of your fork (note that #2425 currently still loads the old broken container-diff; you would need to amend that in the Makefile until the fix is merged).
I say "tests running" because they appear to be a bit busted. for OCI specifically by GoogleContainerTools/container-diff#390, fixed as a part of #2425 I'm hoping.
I think I'm going to have to wait on that stuff before attempting to PR this.
The CI fix has been merged, although in the end the container-diff fix was not part of it, since container-diff seems unmaintained. You will have to find a better way to test OCI images (directly using google/go-containerregistry might allow you to write much tighter unit tests instead).
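For example, a hypothetical unit-test sketch along those lines, reusing the integration tests' local registry reference from earlier in this thread, could assert that an OCI manifest produced by kaniko contains no Docker-typed layers:

package integration

import (
	"testing"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	"github.com/google/go-containerregistry/pkg/v1/types"
)

func TestNoMixedLayerMediaTypes(t *testing.T) {
	// Reference to a kaniko-built test image; this reuses the local registry
	// from the integration tests above and is only an assumption for illustration.
	ref, err := name.ParseReference("localhost:5000/kaniko-test/kaniko-dockerfile_test_issue_1836:latest")
	if err != nil {
		t.Fatal(err)
	}
	img, err := remote.Image(ref)
	if err != nil {
		t.Fatal(err)
	}
	m, err := img.Manifest()
	if err != nil {
		t.Fatal(err)
	}
	if m.MediaType != types.OCIManifestSchema1 {
		t.Skipf("not an OCI manifest: %s", m.MediaType)
	}
	// An OCI manifest should only reference OCI layer descriptors.
	for _, l := range m.Layers {
		if l.MediaType == types.DockerLayer {
			t.Errorf("OCI manifest contains a Docker-typed layer: %s", l.Digest)
		}
	}
}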
@kfix Was there any progress on this now that the tests are somewhat working again?
Same issue here. I'm trying to build an image to use in Fedora Silverblue; rpm-ostree expects a purely OCI image. The base image is OCI, but Kaniko adds Docker layers :/
Still a problem, some progress on this issue would be greatly appreciated :)
@kfix By chance do you have time to try and walk this over the finish line now that tests are back up again?
@BronzeDeer Yes, I've wrapped up my house move and it's very hot outside, so maybe I'll chill and attempt to write test code in Go.
Hey guys, where did this ultimately end up?
Just encountered the same problem. Any progress or workaround would be appreciated. 🙏
Also following. Hoping for a fix soon :crossed_fingers:
Thanks @loganprice !