Hooks to run scripts on host before starting any containers
This is clearly a common problem lots of people have been facing (even since 2014, #468); there's a pile of closed issues asking for similar functionality, and I believe they have been closed entirely unreasonably.
Please see #1341 for a very concise argument as to why this functionality is useful, and judging by the reactions to most of the comments, it is quite a popular feature the community would like added.
Now it's over 2 years since #1341 was closed I believe hook-like functionality should be reconsidered.
Is your feature request related to a problem? Please describe.
There are many examples in #1341 already but I'll add my most recent use case for this.
I have a number of containers that are spun up, using compose, for development, which require a shared data directory. I also need to access that directory on my host. Inside each of my containers a Python program is started as a specific user (so as to mimic production as accurately as possible). Currently I mount this volume on each of my containers in docker-compose like so:
volumes:
- "/tmp/data-var:/var/data"
However /tmp/data-var doesn't exist on my host (this is a shared development project), so it's created by docker for me, as root. Therefore my Python programs, running as non-root, cannot write to it.
Before docker-compose up starts any containers, I'd like to call something like mkdir /tmp/data-var && chmod +w /tmp/data-var. Then on docker-compose down, after all containers are destroyed, I'd like to remove the temp data directory with rm -rf /tmp/data-var.
I understand this could be accomplished in other ways, please see the alternatives section below as to why these suck.
Describe the solution you'd like
I'd like to have two bash scripts, say pre-up.sh and post-down.sh and add them to be called via docker-compose with something like the following in my docker-compose.yml
version: "3"
pre-up: "./pre-up.sh"
post-down: "./post-down.sh"
services:
service1:
build: .
volumes:
- "/tmp/data-var:/var/data"
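For concreteness, here is a minimal sketch of what the pre-up hook from this proposal might contain (pre-up.sh is the hypothetical name from above; post-down.sh would simply be `rm -rf /tmp/data-var`):

```shell
#!/bin/sh
# pre-up.sh (hypothetical): create the shared data directory before any
# container starts, so the bind mount is not created as root by the engine.
set -e
mkdir -p /tmp/data-var
chmod a+w /tmp/data-var
```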
Other possible hooks people might find useful:
- post-up: Called after containers have all started
- post-stop: Called after containers have been stopped (either with ^C or docker-compose stop)
- pre-down: Called before destroying containers and networks (with docker-compose down)
- etc.
When calling these, compose should block at the specified point until the script has returned with an exit code of 0, and itself stop with a non-zero exit code if the script exits with a non-zero code.
Describe alternatives you've considered
There are alternatives for my example use case, and equally good reasons they're a bad fit.
1. Calling a script on container start
I could have an ENTRYPOINT ["start.sh"] which sets the correct permissions on the directory, specify my Python run command via CMD ["python", ...], and have start.sh finally call exec "$@". However this is wasteful: the first container to start, and every container restart after, would repeat the same work, when it only needs to be done once before any containers start.
Equally, it wouldn't solve my post-down: "./post-down.sh" use case.
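As a sketch, such a start.sh might look like this (DATA_DIR is a hypothetical override I've added so the script isn't hard-wired to one path; inside the container it would default to the /var/data mount from the example, and the chmod line stands in for whatever ownership fix is needed):

```shell
#!/bin/sh
# start.sh (hypothetical): fix permissions on the shared mount once,
# then hand off to the image's CMD.
set -e
DATA_DIR="${DATA_DIR:-/tmp/data-var}"   # /var/data inside the container
mkdir -p "$DATA_DIR"
chmod a+w "$DATA_DIR"                   # or chown to the app user, if it exists
exec "$@"                               # run the CMD, e.g. ["python", ...]
```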
2. Wrapping it up in a different script
I could write a wrapper script that calls docker-compose up, as that's been suggested many times in other issues. Come on... we're all using compose because it's concise, neat, tidy and simple to use. Everything is specified in one place which makes it easy for beginners to understand and read what's going on. Compose itself is essentially a standard when using docker with multiple containers.
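For completeness, a sketch of that wrapper (run_with_hooks is a hypothetical name; pre-up.sh and post-down.sh are the hook scripts from the proposal above):

```shell
#!/bin/sh
# Emulate the proposed hooks with a plain wrapper script: block on
# pre-up.sh, run compose, then run post-down.sh once compose exits.
run_with_hooks() {
  ./pre-up.sh || return $?      # abort before any container starts
  docker-compose up "$@"
  status=$?
  ./post-down.sh
  return $status
}
```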
3. Compose events
My understanding of events is limited, admittedly because of how complex they are for what I really want to do. But they are a poor way to achieve the goal I described, just like in many other issues that were raised and then all pointed to #1510 (compose events). Events are reactive; this needs to be proactive. More importantly, events do not block, and for many people, like me, blocking is essential.
Good luck... with Kubernetes. Docker Inc doesn't care.
Volume permissions are a very common issue with docker, and as long as you use bind mounts you are telling the engine "I'm in charge of this one, just expose it inside the container" and get into obvious permission issues. Using named volumes, which are created with the owner set to the first container to use them, would help solve this issue.
What you describe as a proposed solution is pretty comparable to Kubernetes init containers; this is something we should consider for a future version of docker-compose. The main constraint is that the compose file format is not only used by docker-compose, so such a move will require some coordination with the docker stack command and compose-on-kubernetes.
cc @chris-crone
Would be nice to run different scripts before starting any services; in my case I need to create some folders, and which ones depends on the service I want to start.
@jdiegosierra Can't you just do this in entrypoint script?
@TomaszGasior I mean I have to create folders on my host, and they depend on the service. As far as I understand how entrypoint.sh works, it is for running commands inside the container, right?
@jdiegosierra If you need to create directories inside directories shared between host and container, you can create them inside container's entrypoint by wrapping original entrypoint into your own.
@TomaszGasior Yeah, I know :D But that's not my case...
My case is that I'm sharing a project folder with the container, except the dist folder. I want the container to have its own dist folder and my host project its own dist folder, in order to develop with docker or without it. So in my docker-compose I have this:
volumes:
- ../../frontend:/opt/app
- ../../frontend/dist
- /opt/app/dist <-here is the problem
Also in my docker image I have this:
RUN groupadd appuser
RUN useradd -r -u 1001 -g appuser appuser
... build stuff
RUN chown appuser:appuser /opt/app -R
Inside my container the dist folder has the appuser permissions and its built files, so that's okay for me. The problem is outside the container. docker-compose has created a folder called dist with root permissions, so if I want to build my project on my host I can't, because of permissions. However, if I create the dist folder on my host with appuser permissions before starting docker-compose, everything works as I want: the dist folder on my host is empty with appuser permissions, so I can also build my project on my host and it doesn't conflict with the dist folder inside the container.
docker-compose has created a folder called dist with root permissions, so if I want to build my project on my host I can't, because of permissions
As I understand it, what you need is to create a directory from inside the container's entrypoint but with the permissions it would have if created from your host. If there is any directory inside your container with permissions from your host, you may want to use stat and chown.
Let's see my example. I have a PHP application. composer, the PHP package manager, creates a vendor directory with all app dependencies. I want to run it from the container entrypoint but with permissions as if it were run from my host. Check it out: https://github.com/TomaszGasior/RadioLista-v3/blob/bf5692d3d767afcfa7c1ccf46109c4f653c85b1c/container/php-fpm/entrypoint.sh#L8
Basically what I am doing here is running the composer command with the same permissions (user and group) as the parent folder. You may take some inspiration from this. For example, you may create a folder with mkdir, then change its permissions with chown to those of a different directory of your project which is created on your host; you can get them using stat.
It's not exactly what I need. I need to create an empty folder on my host called dist with appuser permissions before starting the service, so that docker-compose doesn't create a dist folder with root permissions.
btw, I appreciate your help :)
It's not exactly what I need. I need to create an empty folder on my host called dist with appuser permissions before starting the service, so that docker-compose doesn't create a dist folder with root permissions.
Or you may let docker create that folder with root permissions and then, inside your entrypoint, just fix the wrong permissions using the method I described. :) It's possible if you have any other directory created on the host by your host user available inside the container. Just stat the second one and chown the first one. Your host user doesn't have to exist inside your container; chown will accept a non-existing user/group ID returned by stat.
Something like: chown $(stat -c '%u:%g' /from-host) /in-container.
/from-host is a directory created by your user in the host OS; /in-container is a directory created by dockerd with the wrong permissions.
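The trick above as a runnable sketch, with temporary stand-ins for the /from-host and /in-container paths (this uses GNU stat's -c flag; BSD/macOS stat uses -f instead):

```shell
#!/bin/sh
# Give a directory the same owner:group as another directory, e.g. fix a
# root-created mount point to match one created by the host user.
mkdir -p /tmp/from-host /tmp/in-container   # stand-ins for the real mounts
chown "$(stat -c '%u:%g' /tmp/from-host)" /tmp/in-container
```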
@jdiegosierra If you need to create directories inside directories shared between host and container, you can create them inside container's entrypoint by wrapping original entrypoint into your own.
Just a side thought - why should I research what is the ENTRYPOINT of the original image in order to prepend/append my own set of runtime commands? Shouldn't it be easier? Something like:
BEFORE_ENTRYPOINT echo "before"
AFTER_ENTRYPOINT echo "after"
So if the original ENTRYPOINT is:
ENTRYPOINT echo "original"
Then the next container that uses this container would have the following value for ENTRYPOINT:
echo "before"
echo "original"
echo "after"
I have a script that reads docker-compose.yaml and makes adjustments based on a few factors, then writes it back out. I basically want to filter docker-compose.yaml and pass an adjusted version to docker-compose. So far there is no elegant way to do this. A set of pre-up and post-down scripts would do nicely.
@Ibsardar Gave me an idea. For my application service I used this:
version: "3"
services:
myapp:
entrypoint: ./bin/entrypoint
And I typically have bin scripts like console, server, and test which are docker-compose run myapp ./bin/_test wrappers. So this technique via the below bin/entrypoint file was a nice way for me to do some pre work before running the other scripts which now remain unchanged.
#!/bin/sh
# Do some blank AWS environment checking, etc...
# Run the orig script. Server, console, test, etc.
exec "$@"
Would also be great if it could set environment variables in the up script. And then later on use these to further configure the services.
Maybe this is useful to some folks: https://github.com/jvasile/docker-wrap
It lets you define a pre-up in docker-compose.yml. Patches welcome!
Would also be great if it could set environment variables in the up script. And then later on use these to further configure the services.
That would be a great feature to have officially.
I think this issue is about adding hooks to run scripts before starting any containers, but the entrypoint option runs scripts after containers have started.
why is this NOT A THING? Like wtf...
+1 - I have more use cases if anybody needs them. I really like the idea of following the pattern of init containers so things can be reused in k8s and my app logic can be rid of bulky startup logic.
If we go the hook route, I'd like to suggest that all hooks are scoped under a "hooks" key which can be global, per-profile, and per-service:
version: '3'
hooks:
  # global hooks
  __profiles__:
    dev:
      # profile hooks
services:
  bar:
    profiles:
      - dev
    hooks:
      # service hooks
I'd also like to suggest better semantics as pre-up and post-up are a bit ambiguous (eg. I'd expect post-up to happen when the container is no longer up - I point to package.json script semantics for this confusion). Maybe consider something like:
- pre-build: runs before build (works for build and up --build)
- pre-run: runs after build, before running (works for up and run)
- running: runs when the container is running
- pre-down: runs before the container is taken down (before SIGINT is sent to the app)
- post-down: runs after the container is down
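A hypothetical sketch of how those names could look in a compose file (none of these keys exist in the Compose spec; the script paths are illustrative):

```yaml
version: "3"
hooks:
  pre-build: ./scripts/pre-build.sh   # before build (and up --build)
  pre-run: ./scripts/pre-run.sh       # after build, before containers start
  pre-down: ./scripts/pre-down.sh     # before SIGINT is sent to the app
  post-down: ./scripts/post-down.sh   # after containers are down
services:
  myapp:
    build: .
```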
In my use case, I need to write the Dockerfile dynamically (depending on the setup of the target machine) before docker-compose builds the image. I'm currently using a bash script that wraps both the creation of the Dockerfile (using another bash script) and the image build/container launch (using docker-compose).
It would be nice to do this with a pre-build hook, as mentioned in the message above. That's because I'm launching this procedure together with many other procedures (on many computers) with Ansible. All the other procedures can be launched using only docker-compose. Only this particular procedure needs to be run with a wrapper bash script, which is annoying.
I forked this project and added the feature of HOOK:
https://github.com/fly-studio/docker-compose
It supports:
- Run command before/after starting containers
- Global hook
- Scoped hook for service
Anybody merging it? People have been asking for this since 2015.
I'm in favor of container hooks, comparable to Kubernetes lifecycle hooks, as this could cover many use cases, like the typical "initialize database with dataset". Anyway, that should be discussed under the compose specification first; see https://github.com/compose-spec/compose-spec/issues/84
I'm far more reluctant about global pre-run scripts, especially running those on the host: this brings both security and portability concerns.
I created a proposal on this topic (at least, partially) feel free to comment/suggest changes https://github.com/compose-spec/compose-spec/pull/289
I created a proposal on this topic (at least, partially) feel free to comment/suggest changes compose-spec/compose-spec#289
Your PR runs within the container. This issue is more about running on the host.
Having the hooks run outside the container is more useful. Having pre/post is also more useful than "on".
it adds the use cases:
- restart another container when a container is recreated [ie: a container with a “network: container” (like a vpn container) is upgraded, it requires a stop/start of other containers which also use the same network.. or those services lose network connectivity and can’t simply be restarted]
- use “docker compose exec” to run a command within the same container that just started [ie: a minor permission change on a file after start or the use cases of the original poster]
- Use “docker compose exec” to run a command within another container [ie: add to a list of running services (index page, home page, monitoring system) on service start of a container]
- open a firewall port via “ufw allow” on the host system to allow access to the port
- Use curl to notify a monitoring system that a container is started without mucking with the container.
Overall doing it out of the container gives you options of both in and out of the container. Hooks should happen on start, stop, restart, and maybe for completeness, remove/create. Hooks should happen on the host system (and using docker exec could be run within this or another container).
Hooks should return 0 on success. Pre-startup hooks should prevent startup if non-zero. Post-startup should throw a warning only. All stop hooks should throw a warning only. Hooks only within the container and not on the host is just an incomplete implementation.
Would also like to see a pre_start/post_start, pre_stop/post_stop, and pre_restart/post_restart vs the ambiguous on_ commands. Lots of use cases when run on the host system.
I'm aware my proposal only partially addresses this issue. It is heavily inspired by Kubernetes lifecycle hooks, which are a proven solution.
I'm 👎 on running local commands from compose, as this would break portability for compose files. The usages you listed are obviously legitimate usages, but those should be addressed with a distinct approach.
I'm 👎 on running local commands from compose, as this would break portability for compose files. The usages you listed are obviously legitimate usages, but those should be addressed with a distinct approach.
I'm, on the other hand, very much for it. It's an extremely useful addition. It's also what this thread/issue is about, as it explicitly says "host" in this issue.
Again, I'm not saying the intent of https://github.com/compose-spec/compose-spec/pull/289 is to fully support this feature request. It just sounds to me like a reasonable addition on this topic. As for running host commands, I won't support this feature request. Maybe other maintainers will see some benefit.
Your PR runs within the container. This is more for on the host
I agree that this issue is more about setting up the host, such as creating folders that will be mounted by containers if they don't exist. Currently I keep a separate script and have to upload both the docker-compose.yml file and docker-pre-compose.sh to the machine. I'm not sure how this would fit with the container lifecycle though, since there's no lifecycle for the group of containers.
e.g. I have a setup script and I need to run it only once (on the first run of the compose file), but if I add another container and modify the setup script I would need it to run again.
Maybe it would make sense to allow defining the script in the docker-compose.yml file but leave when to run it to the user, with a separate command such as docker-compose setup.
@benitogf such a "setup" scenario can also be supported using an entrypoint script. It's up to you to make it idempotent, so running it after the first time doesn't impact the deployment.
Alternatively, you can also define an "init container", i.e. a container that will run before your application container(s), doing all the required setup. An application container with a depends_on: condition: service_completed_successfully directive will only start after this setup has completed.
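A minimal sketch of that init-container pattern, reusing the /tmp/data-var bind mount from the original example (the busybox image and the chown command are illustrative; the depends_on long syntax with service_completed_successfully is real Compose syntax):

```yaml
services:
  setup:
    image: busybox
    command: sh -c "chown -R 1001:1001 /var/data"   # one-time setup, must exit 0
    volumes:
      - /tmp/data-var:/var/data
  app:
    build: .
    volumes:
      - /tmp/data-var:/var/data
    depends_on:
      setup:
        condition: service_completed_successfully   # app waits for setup to finish
```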