A way of mounting a directory where files from the container overwrite host files with `docker compose watch`
### Description
Hey.
First of all, it seems there was already a similar issue, but it lacked the context to understand why it's important to have this implemented in some way, which is why I'm creating another one, sorry: https://github.com/docker/compose/issues/11658
### Our app
Our app is a PHP app and uses the Composer package manager, but all of this is just as relevant for NodeJS apps. All packages are installed into the `vendor` directory, so the entire project structure looks something like this:
```
.
├── backend/
│   ├── src/
│   │   └── SourceFile.php
│   ├── vendor/
│   │   └── google/
│   │       └── api-client/
│   │           └── GoogleFile.php
│   ├── Dockerfile
│   ├── composer.json
│   └── composer.lock
└── docker-compose.yml
```
For development, each team member uses an IDE. The IDE uses the files in `backend/vendor/` to provide type information and auto-completion, and to show the sources of vendor packages whenever necessary. Moreover, since PHP is an interpreted language, we sometimes modify the files in `backend/vendor/` directly to assist with debugging. Of course, any changes in `backend/vendor/` are only ever made locally, during development, and with the full understanding that they will be gone when Composer re-installs dependencies.
### docker-compose.yml
docker-compose.yml is only used for local development. Hence, it currently uses bind mounts to share the entire backend directory into the container:
```yaml
services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
      args:
        - CONTAINER_ENV=local
    volumes:
      - ./backend:/app
    deploy:
      replicas: 2
```
This works, but requires each developer to keep track of changes in Composer's lock files and run `composer install` (which installs dependencies into `vendor`) every time the lock file changes, using something like `docker compose run --rm -it app composer install` or `docker compose exec app composer install`. It works this way:
- old dependencies are bind mounted from `backend/vendor` to `/app/vendor`
- Composer downloads and modifies dependencies in `/app/vendor`
- the bind mount syncs changes from `/app/vendor` back to `backend/vendor` on the host
- other running containers see the changes on the host and propagate them too
### Dockerfile
Locally, no project files are copied into the container; the Dockerfile is just a base PHP image with some configuration.
In production, the app runs on AWS Fargate (which means there's no way to mount anything), so we pre-build our application into a Docker image, with all Composer dependencies and project files.
This is how it looks:
```dockerfile
ARG CONTAINER_ENV

FROM php:8.2.27-fpm-alpine3.21 AS base
COPY --from=composer:2.7.4 /usr/bin/composer /usr/local/bin/composer
WORKDIR /app

FROM base AS base-local
# Nothing here

FROM base AS base-production
COPY backend/composer.json /app/composer.json
COPY backend/composer.lock /app/composer.lock
RUN composer install
COPY backend/src /app/src

FROM base-${CONTAINER_ENV}
EXPOSE 22 80
CMD tail -f /dev/null
```
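For clarity, the final `FROM base-${CONTAINER_ENV}` resolves to `base-local` or `base-production` depending on the build argument; locally we pass `CONTAINER_ENV=local` (as in the compose file above). Purely as an illustration of the mechanism (we don't actually use compose in production), the production equivalent would be:

```yaml
# Illustration only: how the production stage of the same Dockerfile
# would be selected via the build arg.
services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
      args:
        - CONTAINER_ENV=production  # resolves FROM base-${CONTAINER_ENV} to base-production
```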
### docker compose watch
Now, there are several services like this in our project. Each requires developers to keep track of lock files and re-run package managers whenever they change. This is inconvenient and creates a lot of situations that could have been avoided. It also means our production build works in an entirely different way from our local builds.
This is where `docker compose watch` helps: not only would it allow us to use the same (production) Dockerfile for all environments, it would also eliminate all the unnecessary steps developers currently have to take. So let's say we modify the above docker-compose.yml to include the watch configuration and remove the volume:
```yaml
services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    deploy:
      replicas: 2
    develop:
      watch:
        - action: sync
          path: ./backend
          target: /app
          ignore:
            - backend/vendor/
        - action: rebuild
          path: backend/composer.lock
```
This works, but now developers no longer have access to `backend/vendor` on the host, meaning the IDE has no idea what dependencies are installed, and neither do developers. This is a problem.
Let's say we remove the `ignore: [backend/vendor/]` part. Still, `backend/vendor/` is not synced back to the host if it didn't exist there in the first place.
Okay, let's try adding the volume back, just for the vendor directory, and ignore it for watch:
```yaml
services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    volumes:
      - ./backend/vendor:/app/vendor
    develop:
      watch:
        - action: sync
          path: ./backend
          target: /app
          ignore:
            - backend/vendor
            - app/vendor
            - vendor
        - action: rebuild
          path: backend/composer.lock
```
Still broken. Now both the host and container have an empty vendor folder.
### Summary
We need a way of syncing the `backend/vendor` folder between the host and the container, but with the files built into the image always overwriting the host contents.
AFAICT your last attempt is close to a solution; you could rely on the `sync+exec` watch action:
```yaml
services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    volumes:
      - ./backend/vendor:/app/vendor
    develop:
      watch:
        - action: sync
          path: ./backend
          target: /app
          ignore:
            - backend/vendor
            - app/vendor
            - vendor
        - action: sync+exec
          path: backend/composer.lock
          target: /app/composer.lock
          exec:
            command: composer install
```
Anyway, I'm a bit confused by the initial statement, "This requires each developer to keep track of changes in Composer's lock files and run composer install (which installs dependencies into vendor) every time the lock file changes" - doesn't your IDE detect updates to the lock file and suggest running this command? This actually sounds like a local workflow automation issue, as source code is synced from the upstream repo, rather than a compose issue.
Would love to try your solution, but it seems that `sync+exec` is only available from Docker Compose v2.32, and the latest compose shipped with Docker for Mac is currently version v2.31.0-desktop.2.
It does seem like it would work, but it would also mean that `composer install` runs from scratch, without the Dockerfile build-time cache, every time a container comes up. I was looking for more of a native solution: with `action: rebuild` we could use a Dockerfile like this:
```dockerfile
RUN --mount=type=cache,target=/root/.composer/cache composer install
```
It does seem like this could be extracted into a named volume and mounted in docker-compose.yml, but that's still a bit more complicated than I would prefer :) Hope you get where I'm coming from.
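To illustrate what I mean by extracting it into a named volume (a sketch; the volume name is arbitrary, and `/root/.composer/cache` is just the cache path from the Dockerfile line above):

```yaml
# Sketch: a named volume keeps Composer's download cache warm between
# containers, so `composer install` doesn't re-download every package.
services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    volumes:
      - composer-cache:/root/.composer/cache

volumes:
  composer-cache:
```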
### IDE
The IDE does detect updates, and it does suggest running the install command. However:
- not all developers see or pay attention to those notifications, especially AQAs
- you still have to click those manually and wait, instead of just having `docker compose watch` running somewhere and doing 99% of the work
- there are currently three "sub-projects" (i.e. services that are all part of a single project, stored in a monorepo), and all three of them are updated quite frequently, which makes it 3x more likely that someone will miss the notification or forget about it. So developers currently have a script they run when switching branches or pulling changes from the remote, but it is still not ideal
And, most importantly, it still doesn't solve the issue of unifying the different Dockerfile "strategies" we use for local and production deployment. Unifying those would be great in and of itself, but it would also allow us to run additional commands as part of the build on local environments, which we currently cannot do because there are no project files present during the Docker build locally. They are only copied into a container via a volume, so we have to run those commands separately after the containers start.
I get where you're coming from. But `docker compose watch` seems like a perfect solution that would eliminate both the separate Docker build process for local environments and any "manual" part of the developer experience. It would be seamless and wouldn't require any additional scripts or SDLCs :)
A new Docker for Mac was released, so I tried the solution you suggested. Unfortunately, it still does not work. I'm not sure the "exec" portion is even executed the first time I run `docker compose up --build --watch app`. But even changing the composer.lock file manually to trigger the command still doesn't result in dependencies being installed, and the vendor folder stays empty. The terminal does not show any progress or logs that `composer install` would output, only this:
```
[+] Running 3/3
 ✔ app                                  Built    0.0s
 ✔ Network compose-watch-test_default   Created  0.1s
 ✔ Container compose-watch-test-app-1   Created  0.1s
⦿ Watch enabled
Attaching to app-1
```
So I'm not sure if `composer install` has ever even run. For easier reproducibility, I've prepared a tiny repo: https://github.com/oprypkhantc/compose-watch-test
You can try running `docker compose up --build --watch app` and hopefully see whether there's still something I'm doing wrong, or whether it's an issue with Docker Compose itself. Also, as I said in the message above, it'd still be nice not to use `sync+exec`, since the Dockerfile already does that, so it'd be perfect to just use `rebuild` on composer.lock changes somehow :)
A similar issue I've stumbled upon, which is also kind of relevant:
I want to set up a tool called Prettier in docker-compose.yml using a Docker image, without mounting or messing with node_modules at all, i.e. treating the image as a black box. It works well with a setup like this:
```yaml
services:
  prettier:
    image: jauderho/prettier:3.4.2-alpine
    restart: "no"
    command: --cache --cache-location=/work/storage/tmp/.prettier-cache --cache-strategy=content --log-level warn --write .
    volumes:
      - ./:/work
    deploy:
      replicas: 0
```
However, there's an issue with the IDE, where it requires you to have the prettier package installed in project scope in node_modules. In other words, it wants this:
```yaml
services:
  prettier:
    image: jauderho/prettier:3.4.2-alpine
    restart: "no"
    command: --cache --cache-location=/work/storage/tmp/.prettier-cache --cache-strategy=content --log-level warn --write .
    volumes:
      - ./:/work
      - ./node_modules:/var/lib/node_modules
    deploy:
      replicas: 0
```
But if I do that, the node_modules both inside the container and on the host are, expectedly, empty. To be clear: this is an issue with the IDE, and I've reported it on their end. Still, having some way of mounting a folder where container files take precedence over host files and overwrite them on container up would be an okay workaround for now, but it doesn't seem to be possible.
Could you consider adding a flag to bind mounts specifying this exact behaviour? E.g.:
```yaml
services:
  prettier:
    image: jauderho/prettier:3.4.2-alpine
    restart: "no"
    command: --cache --cache-location=/work/storage/tmp/.prettier-cache --cache-strategy=content --log-level warn --write .
    volumes:
      - ./:/work
      - type: bind
        source: ./node_modules
        target: /var/lib/node_modules
        copy_from_container: true
    deploy:
      replicas: 0
```
That should solve both use cases. I understand that this isn't compose's concern, and I could request the feature on the https://github.com/moby/moby side, but first I wanted to hear from you whether it would even be possible and how it would play with docker compose watch.
A bind mount, by nature, replaces the container's filesystem at the target path with the one from the bind source. New files written by the container will actually be created directly on the host. If the container image comes with some initial content at this mount path, it will just be hidden by the mount. This is how Unix mounts work; there's no voodoo magic to be expected here. The challenge is not about declaring a new attribute in compose.yaml, but about the Docker Engine having to manage a scenario that contradicts these core concepts.
I see that this is the case with bind mounts. I just thought that Docker has access to the image and would be able to copy the files from the image directly into the mount, which is not possible in user land. I understand this might not actually be possible or feasible, and I fully get that it looks like a crutch rather than a proper solution. It's just the first thing that came to mind.
But also, as you can see, this is a valid use case. I'm not sure how others utilize docker compose watch for development without a feature similar to this. Similar functionality has been asked about on Stack Overflow as far back as 2017:
- https://stackoverflow.com/questions/47664107/docker-mount-to-folder-overriding-content
- https://stackoverflow.com/questions/42848279/how-to-mount-volume-from-container-to-host-in-docker
- https://stackoverflow.com/questions/66724297/docker-compose-volume-copying-folder-from-docker-container-to-host-when-executi
So it seems that a named volume is closer to a solution than a bind mount, but it still doesn't really work:
- you have to create a directory on the host for the volume manually
- there's no way to drop the volume on image change, or as a `watch` action
```yaml
services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    volumes:
      - ./backend:/app
      - type: volume
        source: backend-vendor
        target: /app/vendor
    develop:
      watch:
        - action: rebuild
          path: backend/composer.lock

  cli:
    image: composer:2.7.4
    working_dir: /app
    volumes:
      - ./backend:/app

volumes:
  backend-vendor:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./backend/vendor
```
Would named volumes maybe be a better starting point? Maybe something like:
```diff
 services:
   app:
     build:
       context: ./
       dockerfile: ./backend/Dockerfile
     volumes:
       - ./backend:/app
       - type: volume
         source: backend-vendor
         target: /app/vendor
     develop:
       watch:
         - action: rebuild
           path: backend/composer.lock
+        - action: host_exec
+          command: rm -rf backend/vendor

   cli:
     image: composer:2.7.4
     working_dir: /app
     volumes:
       - ./backend:/app

 volumes:
   backend-vendor:
     driver: local
     driver_opts:
       type: none
       o: bind
       device: ./backend/vendor
+      create_host_path: true
```
This still looks like a workaround, especially the host_exec part. I understand that host_exec is likely never going to happen, but maybe it gives you a better idea of what I'm trying to achieve. There may be other ways too :)
> having some way of mounting a folder where container files take precedence over host files and overwrite them on container up would be an okay workaround for now, but it doesn't seem to be possible.
Sounds achievable using:
```yaml
services:
  prettier:
    image: jauderho/prettier:3.4.2-alpine
    volumes:
      - ./node_modules:/host/node_modules
    post_start:
      - command: cp -r /var/lib/node_modules/. /host/node_modules/
```
Doing so, when you use `docker compose run prettier`, the `post_start` hook will copy the node_modules content onto the host, relying on the bind mount.
Closing due to inactivity. Since we haven't heard back in a few weeks, we're closing this issue. Feel free to reopen or create a new one if you have more details to share.
@glours The issue is still relevant - please reopen, as I don't have permission to do so. Nicolas has addressed one of the points, but it doesn't solve the problem at scale: unifying Docker build files for development and production by utilizing docker compose watch. It's simply a workaround for one of the cases.
> unifying Docker build files for development and production by utilizing docker compose watch
Not sure I get your point here. Compose watch is designed for development purposes. If you want to keep a main compose file for production and also have a different configuration during development, you can use override files to do so, and it will work with watch 🤔
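For example, a minimal sketch of that override approach (the file name and the watch rules here are illustrative; `compose.override.yaml` is picked up automatically by compose alongside the main file):

```yaml
# compose.override.yaml - development-only additions layered on top of the
# main compose file; the watch configuration lives only here.
services:
  app:
    develop:
      watch:
        - action: sync
          path: ./backend
          target: /app
        - action: rebuild
          path: backend/composer.lock
```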
@glours The point is not to unify docker-compose - we don't use that in production. Rather, it is to unify the Dockerfile itself. Currently we have one Dockerfile that is split (using multi-stage builds and Docker build args) into two very different flows:
- one for production, where sources are copied during the build, then dependencies are installed and the application is built;
- another for development, where none of that happens, because we're using a bind mount to keep the files synced both ways. The problem with this approach is that we have to manually perform everything that would normally happen during a production build (installing dependencies, building the app, etc.) locally, using scripts.

We would love to unify both into a single flow in the Dockerfile (the production one) using docker compose watch, but doing so doesn't work well for a development environment where developers want files from the container synced back to the host (for the IDE and other tooling to index) on start and on "rebuild" events.