[Bug] Docker pull/push to the same registry changes image digest
Description
Docker pull/push to the same registry changes the digest of an image pushed by another tool; it looks like docker reformats the manifest the way it likes.
The bug breaks the docker trust sign process, as the digest is already in use in other systems.
Combined with the session nature of docker login, this hurts any CI process badly, as only docker push can then be used for consistent pushing/signing.
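For illustration, a minimal sketch of the flow that breaks (registry.example.com is a placeholder; docker trust sign re-pushes the tag it is given before signing it):
$ docker pull registry.example.com/app:latest        # digest of this manifest is already recorded elsewhere
$ docker trust sign registry.example.com/app:latest  # re-pushes a reformatted manifest, producing a new digest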
Describe the results you expected: My understanding is that docker push/pull to the same repo has to leave the docker digest unchanged.
Steps to reproduce the issue:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
curl -L https://github.com/GoogleContainerTools/jib/releases/download/v0.8.0-cli/jib-jre-0.8.0.zip -o jib.zip && unzip jib.zip
./build.yaml
apiVersion: jib/v1alpha1
kind: BuildFile
from:
  image: scratch
layers:
  entries:
    - name: scripts
      files:
        - properties:
            filePermissions: 755
          src: build.yaml
          dest: /build.yaml
jib-0.8.0/bin/jib build -c `pwd` -b build.yaml -t registry://localhost:5000/test --allow-insecure-registries --image-metadata-out=meta.json
$ cat meta.json
{"image":"localhost:5000/test","imageId":"sha256:8a3b3ebf46ea72d4c4859e18147485204d7144e3e82dd5b6416b167af9f25a86","imageDigest":"sha256:40b9754190108932d81362bc2b845fb6176b20d5ac99312abf2a1f149309e39e","tags":["latest"]}
$ docker --version
Docker version 20.10.12, build e91ed57
$ docker pull localhost:5000/test
Using default tag: latest
latest: Pulling from test
cf51c3f692e5: Pull complete
Digest: sha256:40b9754190108932d81362bc2b845fb6176b20d5ac99312abf2a1f149309e39e
Status: Downloaded newer image for localhost:5000/test:latest
localhost:5000/test:latest
$ docker push localhost:5000/test
Using default tag: latest
The push refers to repository [localhost:5000/test]
77ef8d169f22: Layer already exists
latest: digest: sha256:340a8be0c6d5d9ec54e49f484a88a3a92290ad8ade703da654265cafa3c8e315 size: 52
The imageDigest changed from sha256:40b9754190108932d81362bc2b845fb6176b20d5ac99312abf2a1f149309e39e to sha256:340a8be0c6d5d9ec54e49f484a88a3a92290ad8ade703da654265cafa3c8e315.
@thaJeztah could I kindly ask you to help review this bug report? If I'm not wrong, it could be an incompatibility between docker cli versions due to formatting changes.
What's the diff in the manifest? Could you do an inspect of the manifest on the first push and the second?
docker buildx imagetools inspect --raw localhost:5000/test
@thaJeztah formatting and field order.
$ docker buildx imagetools inspect --raw localhost:5000/test > meta_registry.json
diff meta_registry.json meta.json
1,16c1
< {
< "schemaVersion": 2,
< "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
< "config": {
< "mediaType": "application/vnd.docker.container.image.v1+json",
< "size": 358,
< "digest": "sha256:8a3b3ebf46ea72d4c4859e18147485204d7144e3e82dd5b6416b167af9f25a86"
< },
< "layers": [
< {
< "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
< "size": 229,
< "digest": "sha256:cf51c3f692e54f42033483875a084a5ff8ace3f69c2e44a8d29525fa4e07741c"
< }
< ]
< }
\ No newline at end of file
---
> {"image":"localhost:5000/test","imageId":"sha256:8a3b3ebf46ea72d4c4859e18147485204d7144e3e82dd5b6416b167af9f25a86","imageDigest":"sha256:40b9754190108932d81362bc2b845fb6176b20d5ac99312abf2a1f149309e39e","tags":["latest"]}
\ No newline at end of file
upd:
$ jib-0.8.0/bin/jib build -c `pwd` -b build.yaml -t registry://localhost:5000/test --allow-insecure-registries --image-metadata-out=meta.json
$ docker pull localhost:5000/test
$ docker buildx imagetools inspect --raw localhost:5000/test > meta_registry_after_docker_pull.json
$ docker push localhost:5000/test
$ docker buildx imagetools inspect --raw localhost:5000/test > meta_registry_after_docker_push.json
$ diff meta_registry_after_docker_pull.json meta_registry_after_docker_push.json
1c1,16
< {"schemaVersion":2,"mediaType":"application/vnd.docker.distribution.manifest.v2+json","config":{"mediaType":"application/vnd.docker.container.image.v1+json","digest":"sha256:8a3b3ebf46ea72d4c4859e18147485204d7144e3e82dd5b6416b167af9f25a86","size":358},"layers":[{"mediaType":"application/vnd.docker.image.rootfs.diff.tar.gzip","digest":"sha256:cf51c3f692e54f42033483875a084a5ff8ace3f69c2e44a8d29525fa4e07741c","size":229}]}
\ No newline at end of file
---
> {
> "schemaVersion": 2,
> "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
> "config": {
> "mediaType": "application/vnd.docker.container.image.v1+json",
> "size": 358,
> "digest": "sha256:8a3b3ebf46ea72d4c4859e18147485204d7144e3e82dd5b6416b167af9f25a86"
> },
> "layers": [
> {
> "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
> "size": 229,
> "digest": "sha256:cf51c3f692e54f42033483875a084a5ff8ace3f69c2e44a8d29525fa4e07741c"
> }
> ]
> }
\ No newline at end of file
@thaJeztah sorry to keep buzzing. Could you please take a look at the case? I believe I have now posted complete info. Thanks!
Hmm.. right, so it looks like the manifest that was pushed got reformatted (pretty-printed) instead of preserving the original without whitespace. The original was probably generated with a library that creates canonical JSON, as described in the image-spec (ISTR that approach was largely abandoned at some point, as canonical JSON was still open to too many variations).
@justincormack @tonistiigi any ideas?
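The effect is easy to demonstrate with any two serializations of the same JSON document (a toy example, not the actual manifest):
$ printf '{"schemaVersion":2}' | sha256sum
$ printf '{\n  "schemaVersion": 2\n}' | sha256sum
The two commands print different hashes for semantically identical content, which is exactly what happens to the manifest digest here.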
@thaJeztah "manifest that was pushed got reformatted (pretty-print), instead of preserving the original without whitespace" exactly and it is quite scary behaviour as image signing process is totally broke. Another potential consequence - broken compatibility if docker cli will change formatting.
@justincormack @tonistiigi what do you think?
The way push is implemented in dockerd atm always creates new manifests on push. Immutability is only guaranteed in the image config layer. For compressed layers, they are kept as is if they already exist in the registry, otherwise they are compressed into new blobs.
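A quick way to see this split in practice (assuming jq is installed): the layer digests survive the round trip, while the manifest digest does not:
$ docker buildx imagetools inspect --raw localhost:5000/test | sha256sum                 # manifest digest: changes after push
$ docker buildx imagetools inspect --raw localhost:5000/test | jq -r '.layers[].digest'  # layer digests: stable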
@tonistiigi but dockerd overwrites the logically same manifest (same entries, only formatting and order changed). Shouldn't it compare it by content? upd: @justincormack @thaJeztah sorry for disturbing you, any news on the issue?
Run docker pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.1.1
v1.1.1: Pulling from ingress-nginx/kube-webhook-certgen
ec52731e9273: Pulling fs layer
b90aa28117d4: Pulling fs layer
ec52731e9273: Verifying Checksum
ec52731e9273: Download complete
b90aa28117d4: Verifying Checksum
b90aa28117d4: Download complete
ec52731e9273: Pull complete
b90aa28117d4: Pull complete
Digest: sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
Status: Downloaded newer image for registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.1.1
registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.1.1
The push refers to repository [docker.io/***/registry.k8s.io_ingress-nginx_kube-webhook-certgen]
ce7a3c1169b6: Preparing
c0d270ab7e0d: Preparing
c0d270ab7e0d: Layer already exists
ce7a3c1169b6: Layer already exists
v1.1.1: digest: sha256:78351fc9d9b5f835e0809921c029208faeb7fbb6dc2d3b0d1db0a6584195cfed size: 739
I ran into the same issue. It prevents deterministic container images, to a certain degree.
My plan was to compare the digest of the image built on CI, which was automatically pushed to the registry, with an image built locally, and then, if they match, to sign the image with cosign.
My workaround for now is to print the digest on CI too and then compare it with the local build before signing with cosign.
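Roughly what that workaround looks like (image name is a placeholder; cosign signs by digest, so the comparison stays meaningful):
$ docker pull registry.example.com/app:latest
$ docker image inspect --format '{{index .RepoDigests 0}}' registry.example.com/app:latest
# prints registry.example.com/app@sha256:<digest>; compare it with the digest printed on CI, then:
$ cosign sign registry.example.com/app@sha256:<digest>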
use the following,
crane cp <src> <dest>
it will not change the image digest!
There is no problem pushing the image to the server; Jib does it without a problem, and so does docker itself. The checksum changes while we sign the image; push/pull here is just to demonstrate the problem (setting up the signing process is a lot more troublesome).
Please don't list tools that can interact with a docker registry (one can do it with curl); it doesn't fix the issue.
use the following,
crane cp <src> <dest>
it will not change the image digest!
There is no such thing as crane cp
~ crane cp --help
crane: error: expected command but got "cp", try --help
~ crane
usage: crane [<flags>] <command> [<args> ...]
Lift containers with ease - https://michaelsauter.github.io/crane
Looks like this is a different crane. The intended CLI is https://github.com/google/go-containerregistry/tree/main/cmd/crane
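For reference, a digest-preserving copy with the go-containerregistry crane (copy is the full command name, cp its alias; the destination registry is a placeholder):
$ crane copy localhost:5000/test:latest registry.example.com/test:latest
$ crane digest registry.example.com/test:latest   # should match the source digest byte for byte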
This is still an issue. I don't understand why this doesn't bother more people. Even other CLIs, like crane, have this problem; see https://github.com/google/go-containerregistry/issues/1922.
It's not possible with graphdrivers due to how they store layers. You can use the containerd image store to get consistent digests though.
@vvoland nonsense; if the digest depends on the manifest, the manifest should not be touched. In other words, the implementation should not affect the contract. If the implementation breaks the contract, it's clearly a bug.
The digest of the image depends on the bytes of the image as a whole, IMO. If you change anything, like reordering keys in the manifest.json, adding an additional tag, or anything else, the digest of the image will change. I guess it is pretty hard to store the image to disk without altering anything within, since it is not as simple as downloading a tar from a webserver: the manifests and layers are pulled separately and put together afterwards to be stored as a tar. Along the way, many things might change depending on the libraries used for marshalling or unmarshalling JSON, or on how the tar is built.
The manifest is not stored at all, it's recreated at push. Also layers are stored in the uncompressed form, while the manifest references the digest of the compressed content. So there's still no guarantee that if you tar and compress the extracted layers they will actually have the same digest as the original content.
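The compression point is easy to reproduce: the same uncompressed bytes can produce different compressed digests depending on gzip settings (a toy example; -n omits the embedded name/timestamp, so only the compression level differs):
$ head -c 100000 /dev/zero | gzip -n -1 | sha256sum
$ head -c 100000 /dev/zero | gzip -n -9 | sha256sum
The two hashes differ, so a recompressed layer generally cannot reproduce the original layer digest.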
True, I guess storing it as TAR is not what I want. :)
The digest of the image depends on the bytes of the image
that is exactly what I said: if the manifest is changed, then the digest is changed.
The manifest is not stored at all...
and that was my second sentence: if the implementation doesn't align with the contract, it's a buggy one and has to be fixed.
There is no plan to fix the graphdriver-backed image store at this time.
If this matters to you, please enable the containerd image store in your docker daemon config. Currently that comes with the caveat that multiplatform manifests can't be pushed; that's being worked on and is a problem with Docker's API, not containerd specifically.
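For anyone looking for the exact switch, the containerd image store is enabled via the daemon config plus a daemon restart (config path and restart command may differ per platform):
$ cat /etc/docker/daemon.json
{
  "features": {
    "containerd-snapshotter": true
  }
}
$ sudo systemctl restart docker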
multiplatform manifests can't be pushed
Small clarification on that one: they can be pushed, it's just that the full image content is required either locally (so you need to pull all the platforms locally first) or in the remote registry repository (registry-dependent, but pushing to the same repository that the image was pulled from should still work).