docker-compose up uses the wrong build target with multiple compose files
Description of the issue
The wrong build target is used on docker-compose up when working with multiple compose files.
It seems the image check doesn't consider the build.target value.
Context information (for bug reports)
Output of docker-compose version
docker-compose version 1.25.4, build 8d51620a
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.1d 10 Sep 2019
Output of docker version
Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:22:34 2019
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:29:19 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683
Steps to reproduce the issue
- Use this Dockerfile:
FROM node:13 AS builder
FROM builder AS development
CMD ["echo", "DEVELOPMENT"]
FROM builder AS ci
CMD ["echo", "CI"]
- Create 3 different docker compose files:

docker-compose.yml:

version: '3.4'
services:
  api:
    build:
      context: .
      target: development

docker-compose-ci.yml:

version: '3.4'
services:
  api:
    build:
      context: .
      target: ci

docker-compose-local.yml:

version: '3.4'
services:
  api:
    build:
      context: .
      target: development
- Run docker-compose -f docker-compose.yml -f docker-compose-ci.yml up.
- Then run docker-compose -f docker-compose.yml -f docker-compose-local.yml up.
Observed result
On the 1st run, it uses the ci build target and executes echo CI, which is expected.
On the 2nd run, it should use the development build target, but it still uses the previous ci target and executes the same CI command, which is NOT expected.
The same bug happens if you run the commands in reverse order.
Expected result
Compose should build and run the build target specified by the extending compose file.
Stacktrace / full error message
grigorii-duca:testapp greg$ docker-compose -f docker-compose.yml -f docker-compose-ci.yml up
Creating network "testapp_default" with the default driver
Building api
Step 1/5 : FROM node:13 AS builder
---> f7756628c1ee
Step 2/5 : FROM builder AS development
---> f7756628c1ee
Step 3/5 : CMD ["echo", "DEVELOPMENT"]
---> Running in 960536e7bd45
Removing intermediate container 960536e7bd45
---> 2a69c4326cad
Step 4/5 : FROM builder AS ci
---> f7756628c1ee
Step 5/5 : CMD ["echo", "CI"]
---> Running in c68aa47d3686
Removing intermediate container c68aa47d3686
---> 28cd629f5770
Successfully built 28cd629f5770
Successfully tagged testapp_api:latest
WARNING: Image for service api was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating testapp_api_1 ... done
Attaching to testapp_api_1
api_1 | CI
testapp_api_1 exited with code 0
grigorii-duca:testapp greg$ docker-compose -f docker-compose.yml -f docker-compose-local.yml up
Recreating testapp_api_1 ... done
Attaching to testapp_api_1
api_1 | CI
testapp_api_1 exited with code 0
grigorii-duca:testapp greg$
Additional information
MacOS
docker-compose -f docker-compose.yml -f docker-compose-ci.yml config
services:
  api:
    build:
      context: /htdocs/combats/testapp
      target: ci
version: '3.4'
docker-compose -f docker-compose.yml -f docker-compose-local.yml config
services:
  api:
    build:
      context: /htdocs/combats/testapp
      target: development
version: '3.4'
This is happening to me but with a different configuration. It's creating dangling images because when I target prod it builds the dev stage as well, and that stage was previously created.
My Dockerfile (I removed some info):
FROM node:12 as base
WORKDIR /server
COPY package.json ./
RUN npm install
COPY ./ ./
# ------------------------
FROM base as dev
ENV NODE_ENV=development
ENV APP_HOST="0.0.0.0"
EXPOSE 3000
CMD ["npm", "run", "dev"]
# ------------------------
FROM base as prod
ENV NODE_ENV=production
ENV PORT=5000
EXPOSE 5000
CMD ["npm", "start"]
My docker-compose (I removed some info too):
version: "3.7"
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
      target: dev
    volumes:
      - ./server:/server
      - /server/node_modules
    container_name: newlit_server
    depends_on:
      - db
    ports:
      - 3000:3000
      - 5000:5000
  db:
    image: mongo
    container_name: newlist_db
    ports:
      - 27017:27017
    volumes:
      - mongodb_data_container:/data/db
volumes:
  mongodb_data_container:
If I run docker-compose up for the very first time, it will pull the node image and then use it as the base for the dev stage. The container is up and running fine.
(screenshots of the base stage and dev stage builds omitted; I skipped all the npm installation output)
But then I face 2 situations when I change the target:
- If I change the target value to prod, docker-compose up doesn't rebuild the image based on that stage of the Dockerfile. Instead, it sets up my container as if it were still in development mode (the dev stage). Shouldn't docker-compose up build (rebuild) the image if it doesn't exist (the prod stage doesn't exist yet) and then run the container?
(screenshot of the dev stage being rebuilt omitted)
- When I run docker-compose build to avoid the step above, it rebuilds the base image from cache, perfect! The dev stage is now dangling because the prod stage has the same label (is that why the dangling image exists?). But in the terminal I can see it building the dev stage and then the prod stage. It makes sense if docker-compose build runs through the whole Dockerfile; in that case it's inevitable that it goes through the dev stage.
(screenshot of the images after all of that omitted)
Now, if I change back to dev and rebuild the image with docker-compose up server --build, it dangles the previous image (the prod stage) and creates the dev stage again from scratch, not from the cache. So now I have more dangling images, and this cycle never ends.
What I finally ended up doing was:
- Remove target from docker-compose.yml
- Create two images by running docker build ./server -t server-dev --target=dev and docker build ./server -t server-prod --target=prod
- Use these images in docker-compose.yml when I need them.
...
services:
  server:
    # for production change it to server-prod
    image: server-dev
...
Hello guys! I'm glad this is a common problem, because I was starting to lose my mind :D
It looks like docker-compose override is ignoring the build: target argument. I thought the problem was in the cache, but no!
If you specify a target (e.g. debug or base) in the main docker-compose.yaml file, then everything works as expected, depending on your build: target: debug|base.
But if you specify the target in an override file (e.g. docker-compose.debug.yaml), then this parameter is ignored!
In my case, @bog-h, I only work with one docker compose file. I'm using version 3.7.
@DracotMolver you might be missing DOCKER_BUILDKIT=1
As of Docker 18.09 you can use BuildKit. One of the benefits is skipping unused stages, so all you should need to do is build with DOCKER_BUILDKIT=1 docker build -t my-app --target prod
https://stackoverflow.com/a/55320725/368144
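For docker-compose itself (as opposed to plain docker build), BuildKit also needs to be enabled on the Compose side. A minimal sketch of the environment setup, assuming Compose 1.25+ and a BuildKit-capable daemon:

```shell
# Enable BuildKit for plain `docker build` invocations
export DOCKER_BUILDKIT=1

# Tell docker-compose (1.25+) to shell out to the docker CLI for builds,
# so its builds also go through BuildKit
export COMPOSE_DOCKER_CLI_BUILD=1

# With BuildKit, only the stages leading to the requested target are built
docker-compose build
```

With both variables set, unused stages should be skipped during `docker-compose build`, just as with `DOCKER_BUILDKIT=1 docker build --target …`.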
@skyeagle That's interesting, but it still doesn't fix the issue of docker-compose not skipping the stages that are not in the target param.
It seems to keep the build.target from whichever file you ran docker-compose build on.
For example, say my docker-compose.yml specifies target: dev and my docker-compose.prod.yml specifies target: bin. If I now run docker-compose -f docker-compose.prod.yml build and then docker-compose up, it starts the bin stage.
@aiordache Any word on this issue?
I too am encountering this issue. In my case, I have the two services in the same docker-compose.yaml file:
services:
  server:
    image: registry.mydomain.com/algos_server:latest
    build:
      context: ./
      dockerfile: algos_server.dockerfile
      target: algos_server
    command: manage.py runserver --noreload 0.0.0.0:8000
  celery:
    image: registry.mydomain.com/algos_server:latest
    build:
      context: ./
      dockerfile: algos_server.dockerfile
      target: algos_celery
    command: celery -A algos.celery worker -l info
And I have the following in algos_server.dockerfile:
FROM python:3.7.7-buster AS algos_base
# bunch of commands to build the docker image
FROM algos_base as algos_server
RUN echo "last build line in algos_server"
ENTRYPOINT ["/usr/local/algos/scripts/algos-server-entrypoint.sh"]
FROM algos_base as algos_celery
RUN echo "last build line in algos_celery"
ENTRYPOINT ["/usr/local/algos/scripts/algos-celery-entrypoint.sh"]
And for completeness, I have these two entry point files.
algos-server-entrypoint.sh
#!/bin/bash
set -e
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
# run migrations, import fixtures, etc... etc...
echo "Starting up the Algos server..."
exec "$@"
And in algos-celery-entrypoint.sh I have:
#!/bin/bash
set -e
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
# run migrations, import fixtures, etc... etc...
echo "Starting up the Algos Celery worker..."
exec "$@"
Result
When I run docker-compose -f docker-compose.yaml build && docker-compose -f docker-compose.yaml up, I get this output:
Successfully built 590673ed935d
Successfully tagged registry.mydomain.com/algos_server:latest
WARNING: Some networks were defined but are not used by any service: algos_public
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Recreating algos_server_1 ... done
Recreating algos_celery_1 ... done
Attaching to algos_celery_1, algos_server_1
celery_1 | --------------------------------------------------------------------------------
celery_1 | Starting up the Algos Celery worker...
celery_1 | --------------------------------------------------------------------------------
server_1 | --------------------------------------------------------------------------------
server_1 | Starting up the Algos Celery worker...
server_1 | --------------------------------------------------------------------------------
Conclusion
For some reason docker-compose seems to be ignoring the target directive in docker-compose.yaml. I will try separating the two services into their own docker-compose.yaml files and see if that helps, but according to the docs for docker-compose and the multi-stage build docs, I believe it should be executing each service's entrypoint, instead of executing the Algos Celery entrypoint for both services.
I found a workaround, which is to describe my two Docker services in two separate docker-compose.yaml files.
After I put algos_server into algos-server-base.yaml and algos_celery into algos-celery-base.yaml, I get the following output, which is as I expected it to be.
server_1 | --------------------------------------------------------------------------------
server_1 | Starting up the Algos server...
server_1 | --------------------------------------------------------------------------------
celery_1 | --------------------------------------------------------------------------------
celery_1 | Starting up the Algos Celery worker...
celery_1 | --------------------------------------------------------------------------------
We've run into the original problem here, and it appears this only happens if you don't rebuild the target. docker-compose does not detect that the built container would be different in each run without re-building.
In the first run of docker-compose -f docker-compose.yml -f docker-compose-ci.yml up, the system sees it doesn't have an image, so it builds with the ci target.
In the second run of docker-compose -f docker-compose.yml -f docker-compose-local.yml up, since it's just trying to bring up the container (not build), it doesn't recognize that the already-built image differs from what the new build target would produce, and just brings up the existing image.
I think this is not too surprising given that the target is in the "build" area. It's a bit of a more advanced mode to go reevaluate everything about the compose targets during an up.
To solve this, just execute a build before the 2nd up to trigger docker-compose to reevaluate the images and rebuild with the wanted target.
% docker-compose -f docker-compose.yml -f docker-compose-ci.yml up
Creating network "docker-compose-target-bug-test_default" with the default driver
Building api
Step 1/5 : FROM node:14-slim AS builder
---> cdb457aa69ed
Step 2/5 : FROM builder AS development
---> cdb457aa69ed
Step 3/5 : CMD ["echo", "DEVELOPMENT"]
---> Running in 9431c3d0516b
Removing intermediate container 9431c3d0516b
---> 1e13634516d7
Step 4/5 : FROM builder AS ci
---> cdb457aa69ed
Step 5/5 : CMD ["echo", "CI"]
---> Running in 4e5a7801767e
Removing intermediate container 4e5a7801767e
---> fc8d5a1871f0
Successfully built fc8d5a1871f0
Successfully tagged docker-compose-target-bug-test_api:latest
WARNING: Image for service api was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating docker-compose-target-bug-test_api_1 ... done
Attaching to docker-compose-target-bug-test_api_1
api_1 | CI
% docker-compose -f docker-compose.yml -f docker-compose-local.yml build
Building api
Step 1/3 : FROM node:14-slim AS builder
---> cdb457aa69ed
Step 2/3 : FROM builder AS development
---> cdb457aa69ed
Step 3/3 : CMD ["echo", "DEVELOPMENT"]
---> Using cache
---> 1e13634516d7
Successfully built 1e13634516d7
Successfully tagged docker-compose-target-bug-test_api:latest
% docker-compose -f docker-compose.yml -f docker-compose-local.yml up
Recreating docker-compose-target-bug-test_api_1 ... done
Attaching to docker-compose-target-bug-test_api_1
api_1 | DEVELOPMENT
docker-compose-target-bug-test_api_1 exited with code 0
@eweidner For me, rebuilding does not fix it, and I'm not running multiple compose files at once. Do you think it's a different issue?
It seems to keep the build.target from whichever file you ran docker-compose build on.
For example, say my docker-compose.yml specifies target: dev and my docker-compose.prod.yml specifies target: bin. If I now run docker-compose -f docker-compose.prod.yml build and then docker-compose up, it starts the bin stage.
@probablykasper I don't see any details on your specifics so I'm not sure if your issue is different.
If you are seeing what DracotMolver is seeing, then that may be a different issue. I cannot reproduce it in some of my local code that seems similar to theirs; my builds in that situation skip dev when I build prod.
I only reproduced the issue presented by the OP and doing a build in between worked for me.
I ran into this issue. The TARGET2 CMD went backward to the previous build stage, where TARGET1 was defined, and shouldn't have overwritten its CMD step.
So... Target2 overwrote Target1's CMD. Target1 was Target2's base.
For us, target: is not working in the first place.
Dockerfile (abridged):
  build dev
  build prod
  adduser graphuser
docker-compose.yml:
  service-test:
    build:
      target: dev
    command: whoami
logs:
  service-test: graphuser
Same issue here; it looks like the target parameter is completely ignored.
With a Dockerfile like:
FROM ubuntu:xenial AS builder
RUN ...
FROM builder AS builder-x64
RUN ...
FROM builder AS builder-x86
RUN ...
I've explicitly used:

version: '3.4'
services:
  compiler:
    build:
      context: .
      target: builder-x86
And it keeps using builder-x64.
Even pre-building targets (as was commented before) doesn't look like a solution, e.g.:
docker build --target builder-x64 .
docker build --target builder-x86 .
docker compose up
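One possible reason pre-building with plain docker build doesn't help: Compose (v1) looks up a service's image by its project-scoped tag, <project>_<service>, and an untagged docker build result never matches it. A hedged sketch of aligning the two, where the project name myproject is a placeholder (by default it is your directory name):

```shell
# Build the wanted stage and tag it the way Compose expects
# (<project>_<service>; "myproject" is a placeholder, not from the thread)
docker build --target builder-x86 -t myproject_compiler .

# Bring the service up without letting Compose rebuild from its stale view
docker-compose up --no-build
```

This is only a sketch of the tag-matching behavior, not a confirmed fix from the maintainers.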
This works for me right now, but what if I want to specify a target to build in docker-compose.yml? I'm currently using version 3.4.
I had problems with this in GitHub Actions, but not locally. I added DOCKER_BUILDKIT: 1, figuring that docker-compose is just a wrapper around docker, and it worked.
Same issue here, running docker-compose.yml with a docker-compose.override.yml file.
It worked with BuildKit enabled, as suggested by @infinito84.
More about BuildKit: What is BuildKit? - Introducing BuildKit
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Same problem.
I've set:
version: "3.5"
services:
  hsm:
    build:
      context: .
      target: mytarget
But it's not using the target at all.
This issue has been automatically marked as not stale anymore due to the recent activity.
Same problem
Same problem. My current solution is to use a single-stage Dockerfile. I used the Next.js Dockerfile and it doesn't work: https://github.com/vercel/next.js/blob/canary/examples/with-docker-multi-env/docker/production/Dockerfile
There are a few different things going on in this issue:
- If target isn't doing anything at all, it's likely that you're not building with BuildKit. Refer to @rafawhs's comment.
- There is an issue where, when the target is changed in a compose file, such as:

services:
  foo:
    build:
      context: .
      target: devel

then running compose up, then changing the Compose file to:

services:
  foo:
    build:
      context: .
      target: ci

and running compose up again, the image won't be rebuilt according to the new target. A quick fix for this is running compose up --build, to force image rebuilding. This same issue is what was seen in the original report, via overwriting the target attribute of the build section. It doesn't actually have anything to do with multiple Compose files, but with how and when images are built: after Compose has built the image for a service (with whatever target was defined at the time), the image won't be rebuilt on subsequent ups unless you ask for it.
Please comment (feel free to @ me) if there are other issues other than these.
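To summarize the practical takeaway of this thread as commands (file names taken from the original report; output omitted since it depends on your setup):

```shell
# First run: no image exists yet, so Compose builds with the `ci` target
docker-compose -f docker-compose.yml -f docker-compose-ci.yml up

# Switching override files alone is NOT enough: Compose reuses the cached image.
# Pass --build so it reevaluates the (new) `development` target
docker-compose -f docker-compose.yml -f docker-compose-local.yml up --build
```

The `--build` flag on `up` is equivalent to running a separate `docker-compose build` before `up`, as shown in the logs earlier in the thread.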