docker-compose build and docker build lead to different IDs
These two issues have been closed as a duplicate and for staleness, respectively:
- https://github.com/docker/compose/issues/5873
- https://github.com/docker/compose/issues/883
We don't have the right to reopen them while the problem persists, so I'm creating a new issue.
Copying my comment here for emphasis:
I can build the same image with THREE different hashes.
- `docker build .`
- `docker-compose up --build`
- `COMPOSE_DOCKER_CLI_BUILD=true docker-compose up --build`

This results in 3 fully functional docker images, all with different hashes.
I think it has something to do when using a COPY command on a directory.
I get cache hits for all of my Dockerfile steps until the first COPY command that copies a directory (COPY commands for individual files work and hit the cache correctly).
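A minimal Dockerfile sketch of the pattern described above (paths and package manager are hypothetical, not from the original report):

```dockerfile
FROM node
WORKDIR /app
# Single-file COPY: hits the cache correctly on rebuild
COPY package.json package-lock.json ./
# Directory COPY: reportedly misses the cache under classic docker-compose builds
COPY src ./src
RUN npm ci
```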
For me it does not work even without COPY. When I have a Dockerfile with two commands, where the second command fails, the first command is always executed again. This doesn't happen with plain docker.
@knyttl Show the Dockerfile, please. And ideally, steps to reproduce.
I'm not sure about you, but for me different IDs presumably amount to docker-compose ignoring valid cache. If that's the case for you I'd update the title.
And the IDs are not content hashes:
Also worth noting that since these IDs are not content hashes, two builds will never generate the same ID unless the cache is used, even when the content is exactly the same.
My understanding is that the changes are not caused by the files themselves, but by differences in the archives sent to docker (by COPY): docker-compose and docker use different tar implementations, so the archive bytes differ even when the files are identical.
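To illustrate the tar point (a standalone Python sketch, not compose's actual code): two archives of byte-identical file content already differ when only archive metadata such as mtime differs, so any ID derived from the archive bytes differs too.

```python
import hashlib
import io
import tarfile

def tar_digest(mtime: int) -> str:
    """Build an in-memory tar containing one file and hash the archive bytes."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name="hello.txt")
        data = b"identical content"
        info.size = len(data)
        info.mtime = mtime  # only archive metadata differs, not the file content
        tar.addfile(info, io.BytesIO(data))
    return hashlib.sha256(buf.getvalue()).hexdigest()

# Same file content, different metadata -> different archive bytes.
print(tar_digest(0) != tar_digest(1))  # True
```

The same effect applies to any difference between tar writers (header ordering, padding, default metadata), which is why two build tools can send differing contexts for identical files.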
From what I can see, the resolution of this issue was blocked by changes in docker. @shin-, the PR is merged now; is it still a blocker?
Also, next year a python package, that uses the docker client directly (whatever that means), might become publicly available. Just spreading rumors :)
Compose already supports this; set the COMPOSE_DOCKER_CLI_BUILD=1 environment variable to use the native CLI for building (and DOCKER_BUILDKIT=1 to use BuildKit). These can also be set in the .env file: https://github.com/docker/docker.github.io/blob/master/.env
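For reference, a minimal .env file next to the compose file, using the two variables from the comment above:

```
# Opt in to the native CLI builder and BuildKit for docker-compose v1
COMPOSE_DOCKER_CLI_BUILD=1
DOCKER_BUILDKIT=1
```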
Unfortunately it doesn't help in my case:
docker-compose-production-2.yml:
version: "3"
services:
  nginx:
    build:
      context: .
      dockerfile: docker2/Dockerfile2
      target: nginx
  php:
    build:
      context: .
      dockerfile: docker2/Dockerfile2
      target: php
docker2/Dockerfile2:
# 30
FROM node AS assets
WORKDIR /app
COPY package.json package-lock.json ./
COPY docker2 docker2
RUN npm i
FROM node as php
FROM node as nginx
$ docker image prune -f; COMPOSE_DOCKER_CLI_BUILD=1 docker-compose -f docker-compose-production-2.yml build; echo -e '\a'
...
WARNING: Native build is an experimental feature and could change at any time
Building nginx
Sending build context to Docker daemon 12.93MB
Step 1/7 : FROM node AS assets
---> 969d445a1755
Step 2/7 : WORKDIR /app
---> Using cache
---> 8ca5d55207e2
Step 3/7 : COPY package.json package-lock.json ./
---> e34bb291de3a
Step 4/7 : COPY docker2 docker2
---> e5e2a5673096
Step 5/7 : RUN npm i
---> Running in ba90f021defe
added 1088 packages, and audited 1088 packages in 12s
14 vulnerabilities (5 low, 9 high)
To address all issues, run:
npm audit fix
Run `npm audit` for details.
npm notice
npm notice New patch version of npm available! 7.0.8 -> 7.0.14
npm notice Changelog: <https://github.com/npm/cli/releases/tag/v7.0.14>
npm notice Run `npm install -g [email protected]` to update!
npm notice
Removing intermediate container ba90f021defe
---> 2551b7161ee8
Step 6/7 : FROM node as php
---> 969d445a1755
Step 7/7 : FROM node as nginx
---> 969d445a1755
Successfully built 969d445a1755
Successfully tagged mangorv-backend_nginx:latest
Building php
Sending build context to Docker daemon 12.93MB
Step 1/6 : FROM node AS assets
---> 969d445a1755
Step 2/6 : WORKDIR /app
---> Using cache
---> 8ca5d55207e2
Step 3/6 : COPY package.json package-lock.json ./
---> Using cache
---> e34bb291de3a
Step 4/6 : COPY docker2 docker2
---> c66e5ed778a7
Step 5/6 : RUN npm i
---> Running in 0fd47a63b807
added 1088 packages, and audited 1088 packages in 12s
14 vulnerabilities (5 low, 9 high)
To address all issues, run:
npm audit fix
Run `npm audit` for details.
npm notice
npm notice New patch version of npm available! 7.0.8 -> 7.0.14
npm notice Changelog: <https://github.com/npm/cli/releases/tag/v7.0.14>
npm notice Run `npm install -g [email protected]` to update!
npm notice
Removing intermediate container 0fd47a63b807
---> 1aaa33dccaa3
Step 6/6 : FROM node as php
---> 969d445a1755
Successfully built 969d445a1755
Successfully tagged mangorv-backend_php:latest
npm i is executed twice.
The mystery is that vim has to keep docker2/Dockerfile2 open for this to reproduce.
Could it be it's writing a temp file to the docker2 directory? (https://stackoverflow.com/questions/607435/why-does-vim-save-files-with-a-extension)
You can try adding the Dockerfile and any such temp/swap files to .dockerignore.
Does it make a difference if you enable buildkit?
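For example, .dockerignore patterns covering vim's default swap and backup file names (illustrative; adjust to your layout):

```
# vim swap and backup files (a.txt -> .a.txt.swp, a.txt~)
.*.swp
**/.*.swp
*~
```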
I tried to dockerignore it, but apparently I mistyped. Indeed, by default vim creates a swap file alongside every open file (a.txt -> .a.txt.swp). And for some reason, on save it changes the swap file twice (mtime, ctime): once immediately, and once after 8 seconds. (Actually the same happens if you change a file but don't save.) With that, I was able to reproduce my issue locally.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Still extremely relevant
This issue has been automatically marked as not stale anymore due to the recent activity.
still relevant
It's Dec 2021 and we still have this error?
No wonder people are abandoning the project for other container technologies.
Tried with both binaries: docker and docker-compose.
$ docker --version
Docker version 20.10.8, build 3967b7d
$ docker-compose --version
docker-compose version 1.29.2, build 5becea4c
BASE DOCKERFILE
- With ONBUILD COPY, the cache is created on the first execution
- However, on the second execution it won't invalidate the cache even when the file changes

ONBUILD COPY pubspec.* /usr/local/bin/app/

Rebuilding with docker-compose
- It fails to invalidate the cache and keeps using exactly the same file as before, even if we change the file

Without --no-cache
- It correctly shows CACHED
$ docker compose build
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/supercash/flutter-web-app:latest 0.0s
=> [internal] load build context 0.3s
=> => transferring context: 375.77kB 0.3s
=> [1/1] FROM docker.io/supercash/flutter-web-app 0.0s
=> CACHED [2/1] WORKDIR /usr/local/bin/app 0.0s
=> CACHED [3/1] RUN echo "Copying the dependencies: pubspec.*" 0.0s
=> CACHED [4/1] COPY pubspec.* /usr/local/bin/app/ 0.0s
=> CANCELED [5/1] RUN flutter pub get
🐛 With --no-cache: expected to use the new file, but we see the same file as before
- The log correctly shows that the execution is not cached
- However, the file used during the build is the same one from the first execution
$ docker compose build --no-cache
Building maceio-shopping-tickets-web
[+] Building 6.4s (9/14)
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/supercash/flutter-web-app:latest 0.0s
=> [internal] load build context 0.4s
=> => transferring context: 373.85kB 0.4s
=> [1/1] FROM docker.io/supercash/flutter-web-app 0.0s
=> CACHED [2/1] WORKDIR /usr/local/bin/app 0.0s
=> [3/1] RUN echo "Copying the dependencies: pubspec.*" 0.4s
=> [4/1] COPY pubspec.* /usr/local/bin/app/
@marcellodesales from your description, your issue looks unrelated to this ticket, which is about differences in the build implementation between the classic (python) compose implementation and native docker build. It's possible there's a bug in BuildKit, but make sure you're running the latest version of the Docker Engine (20.10.12 at the time of writing, or 20.10.11 on Docker Desktop), in case it's fixed in a patch release.
I tried reproducing your issue but wasn't able to (steps below). If you have a way to reproduce it, which could depend on the base image or your build context (e.g., if there are paths that use symlinks), please open a ticket in https://github.com/moby/buildkit/issues with the exact steps to reproduce the issue.
Create an onbuild parent image:
mkdir repro-7905 && cd repro-7905
docker build -t onbuild -f- . <<'EOF'
FROM busybox
ONBUILD COPY pubspec.* /usr/local/bin/app/
EOF
Create a pubspec.one file, and build an image FROM that image:
echo "pubspec one" > pubspec.one
docker build -t foo -f- . <<'EOF'
FROM onbuild
EOF
Verify the file has the expected content:
docker run --rm foo cat /usr/local/bin/app/pubspec.one
pubspec one
Modify the pubspec.one file, and build again:
echo "pubspec one modified" > pubspec.one
docker build -t foo -f- . <<'EOF'
FROM onbuild
EOF
[+] Building 0.2s (7/7) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 55B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/onbuild:latest 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 59B 0.0s
=> CACHED [1/1] FROM docker.io/library/onbuild 0.0s
=> [2/1] COPY pubspec.* /usr/local/bin/app/ 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:2f7f4e0e0b38d51301af951c4216fc384ceb1e108bf69469fc559997dbd1e285 0.0s
=> => naming to docker.io/library/foo 0.0s
Verify the file has the expected content:
docker run --rm foo cat /usr/local/bin/app/pubspec.one
pubspec one modified
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Still relevant
This issue has been automatically marked as not stale anymore due to the recent activity.
Sorry for the terrible delay without an answer on this issue. Most of the examples shared in the comments use Compose v1, with or without BuildKit enabled. Compose v1 will reach End of Life in a few weeks, and Compose v2 uses the same codebase as docker build to build images, which I expect will help solve this issue.
@marcellodesales As you already use docker compose build (i.e., Compose v2) here, can you please confirm you get the same cache issue running docker buildx build --no-cache ... with your Dockerfile?
Closing as unclear and obsolete. If you encounter a comparable issue, please create a fresh issue with details about your environment.