--reproducible flag massively increases build time
I am building one large docker image and I am experiencing a somewhat weird behavior that I would like to address.
Host info:
- Docker version: 20.10.12
- kaniko version: 1.7.0
So we are talking about a large-ish image (~2.5 GB). When I build it with cache enabled it completes in about 1 minute. So far so good, but the problem I am facing is that every cached build still produces a new artifact with a new sha256.
I found out that the --reproducible flag exists for exactly this: when I build the image with --reproducible it no longer creates a new artifact with a new sha256, but the build time increases from ~1 min to ~7 mins. That is a huge overhead imho and I would like to figure out why.
I captured logs (with trace verbosity) with and without the --reproducible flag. The main difference found in the logs:
--reproducible enabled
time="2022-03-02T12:36:05Z" level=debug msg="mapping stage idx 0 to digest sha256: ..."
time="2022-03-02T12:36:05Z" level=debug msg="mapping digest sha256 to cachekey ..."
time="2022-03-02T12:40:47Z" level=info msg="Pushing image to ..."
--reproducible disabled
time="2022-03-02T12:29:12Z" level=debug msg="mapping stage idx 0 to digest sha256: ..."
time="2022-03-02T12:29:12Z" level=debug msg="mapping digest sha256 to cachekey ..."
time="2022-03-02T12:29:12Z" level=info msg="Pushing image to ..."
If you look closely at the timestamps, when --reproducible is enabled there is a gap of roughly 5 minutes between the stage/digest mapping and the push, whereas with --reproducible disabled there is no gap at all.
Why does this happen, and why does it add so much overhead to the build process?
Is there any other way to use the cache to build an image without creating a new artifact, the way docker does? --reproducible completely strips the timestamps, which makes the output of docker images useless.
For example, in the code block below, when --reproducible is used the CREATED column shows N/A, which is not the desired behavior.
REPOSITORY TAG IMAGE ID CREATED SIZE
random-image latest abcdefghijk N/A 2.68GB
Another issue is that when --reproducible is used, running docker history on the image produces output that is not useful at all:
IMAGE CREATED CREATED BY SIZE
<missing> 292 years ago 364B
<missing> 292 years ago 18.4kB
<missing> 292 years ago 5.69MB
<missing> 292 years ago 4.93kB
<missing> 292 years ago 12.5kB
<missing> 292 years ago 83.9kB
<missing> 292 years ago 0B
<missing> 292 years ago 222MB
<missing> 292 years ago 780B
<missing> 292 years ago 0B
<missing> 292 years ago 5.85MB
<missing> 292 years ago 5.1MB
<missing> 292 years ago 58.5MB
<missing> 292 years ago 127MB
<missing> 292 years ago 198MB
As you can see, with the --reproducible flag all useful information is lost from the docker history output.
For your second question, see here. Using the flag, all timestamps are stripped off and no history is available.
Will have to dig deeper into the performance issue. Can you clarify your use case a little bit more? What do you mean by "but the problem that I am facing is that this cached image is always creating a new artifact with a new sha256"?
Thank you for your answer @tejal29!
So this behavior of completely stripping the timestamps and the history is expected, then. What I wanted to do was build an image with kaniko using the cache. I expected that if the cache was used to build the image, it would not create a new artifact in my registry. Instead, even when the image is built entirely from cache (so nothing has actually been updated), kaniko creates a "new" image in the registry. That causes problems with subsequent caching and fills the registry with copies of the same image. Docker, for example, does not create a new artifact with a new sha256 when it builds an image from cache, but kaniko does.
The --reproducible flag is not preferred because of the overhead in build time and the stripped timestamps. We value those timestamps and the image history.
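If I understand the digest behavior correctly, the new sha256 on every cached rebuild comes from the image config blob rather than the layers: the config carries a top-level created timestamp plus one history entry per layer, the manifest digest covers that blob, and fresh timestamps are written on every build unless --reproducible strips them. A minimal sketch with go-containerregistry that prints those fields (the registry reference is just a placeholder):

```go
// Sketch only: inspect the config blob whose timestamps make each rebuild's
// digest unique. Uses go-containerregistry; the image reference is a placeholder.
package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	// Placeholder reference; substitute the image kaniko pushed.
	img, err := crane.Pull("registry.example.com/random-image:latest")
	if err != nil {
		log.Fatal(err)
	}

	digest, err := img.Digest()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("manifest digest:", digest)

	cfg, err := img.ConfigFile()
	if err != nil {
		log.Fatal(err)
	}

	// `docker images` renders CREATED from this field; --reproducible zeroes it,
	// which is why it shows up as N/A above.
	fmt.Println("config created:", cfg.Created.Time)

	// `docker history` is rendered from these entries; --reproducible strips
	// their timestamps as well.
	for i, h := range cfg.History {
		fmt.Printf("history[%d]: created=%s created_by=%q\n", i, h.Created.Time, h.CreatedBy)
	}
}
```

If that is right, it would also explain the docker history output above: with the timestamps stripped, the age calculation saturates at the largest duration Go can represent, which is roughly 292 years.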
Hey, just to add some voice to the issue, I am also seeing super slow builds with --reproducible, up to 7-8 times slower.
I suspect (but have no proof here) that it is because it takes time to strip the timestamp metadata after the image is built: the image layers need to be extracted, changed and then repacked.
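That matches what go-containerregistry, which kaniko builds on, has to do to strip timestamps: each layer tarball is read back, every tar header gets a fixed timestamp, and the layer is re-gzipped before new digests can be computed. A rough sketch of that rewrite, assuming mutate.Time is the relevant helper (I have not traced kaniko's exact call path; the image reference is a placeholder):

```go
// Sketch only: strip timestamps from an image the way --reproducible has to,
// using go-containerregistry's mutate package.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/google/go-containerregistry/pkg/crane"
	"github.com/google/go-containerregistry/pkg/v1/mutate"
)

func main() {
	img, err := crane.Pull("registry.example.com/some-large-image:latest") // placeholder
	if err != nil {
		log.Fatal(err)
	}

	start := time.Now()

	// mutate.Time rewrites every layer: each tarball is decompressed, every
	// tar header gets the fixed timestamp, and the layer is re-gzipped so new
	// digests can be computed. For a multi-GB image this is minutes of work.
	stripped, err := mutate.Time(img, time.Time{})
	if err != nil {
		log.Fatal(err)
	}

	digest, err := stripped.Digest()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("stripped image digest %s (took %s)\n", digest, time.Since(start))
}
```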
To be fair, I have noticed that I am using a very old version of Kaniko (v1.3.0); I will try to update it and see if it works better.
Ok, I have tested and it is still very slow with Kaniko v1.8.1.
Let me show an example:
# > cat Dockerfile
FROM quay.io/pypa/manylinux2014_x86_64
ENV FOO=BAR
Here with --reproducible (it takes 1 minute and 10 seconds):
# > /kaniko/executor --no-push --dockerfile Dockerfile --log-timestamp --destination tmp --tarPath ./tmp.tar --reproducible
INFO[2022-06-21T09:47:29Z] Retrieving image manifest quay.io/pypa/manylinux2014_x86_64
INFO[2022-06-21T09:47:29Z] Retrieving image quay.io/pypa/manylinux2014_x86_64 from registry quay.io
INFO[2022-06-21T09:47:30Z] Built cross stage deps: map[]
INFO[2022-06-21T09:47:30Z] Retrieving image manifest quay.io/pypa/manylinux2014_x86_64
INFO[2022-06-21T09:47:30Z] Returning cached image manifest
INFO[2022-06-21T09:47:30Z] Executing 0 build triggers
INFO[2022-06-21T09:47:30Z] Skipping unpacking as no commands require it.
INFO[2022-06-21T09:47:30Z] ENV FOO=BAR
INFO[2022-06-21T09:48:39Z] Skipping push to container registry due to --no-push flag
Here without --reproducible, it takes 8 seconds:
# > /kaniko/executor --no-push --dockerfile Dockerfile --log-timestamp --destination tmp --tarPath ./tmp.tar
INFO[2022-06-21T09:49:06Z] Retrieving image manifest quay.io/pypa/manylinux2014_x86_64
INFO[2022-06-21T09:49:06Z] Retrieving image quay.io/pypa/manylinux2014_x86_64 from registry quay.io
INFO[2022-06-21T09:49:07Z] Built cross stage deps: map[]
INFO[2022-06-21T09:49:07Z] Retrieving image manifest quay.io/pypa/manylinux2014_x86_64
INFO[2022-06-21T09:49:07Z] Returning cached image manifest
INFO[2022-06-21T09:49:07Z] Executing 0 build triggers
INFO[2022-06-21T09:49:07Z] Skipping unpacking as no commands require it.
INFO[2022-06-21T09:49:07Z] ENV FOO=BAR
INFO[2022-06-21T09:49:14Z] Skipping push to container registry due to --no-push flag
With kaniko-project/executor:v1.19.2-debug, building the same image:
- with the --reproducible flag, the build took 5m36s and used 7690 MB
- without the --reproducible flag, the build took 1m38s and used 350 MB
Activating profiling (https://github.com/GoogleContainerTools/kaniko#kaniko-builds---profiling), I see a lot of inflate/deflate traces with the --reproducible flag:
kaniko.zip
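Those inflate/deflate frames fit the layer-rewrite explanation above: to zero the timestamps, every layer has to be decompressed and re-gzipped, and for multiple GB of layers that alone can take minutes. A standalone sketch (illustrative only, not kaniko code) to get a feel for the cost of a gzip round trip:

```go
// Illustrative only: time a gzip round trip to get a feel for the cost that
// shows up as inflate/deflate in the profile.
package main

import (
	"bytes"
	"compress/gzip"
	"crypto/rand"
	"fmt"
	"io"
	"log"
	"time"
)

func main() {
	const size = 256 << 20 // 256 MiB of random sample data (real layers compress differently)
	data := make([]byte, size)
	if _, err := rand.Read(data); err != nil {
		log.Fatal(err)
	}

	start := time.Now()

	// Deflate: compress the data, as re-packing a rewritten layer would.
	var packed bytes.Buffer
	zw := gzip.NewWriter(&packed)
	if _, err := zw.Write(data); err != nil {
		log.Fatal(err)
	}
	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}

	// Inflate: decompress it again, as reading the original layer would.
	zr, err := gzip.NewReader(&packed)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(io.Discard, zr); err != nil {
		log.Fatal(err)
	}

	fmt.Printf("gzip round trip of %d MiB took %s\n", size>>20, time.Since(start))
}
```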