docker-compose dynamic volume names for replicas
moved over from https://github.com/moby/moby/issues/43079
Tried some of the "solutions" from this thread and Stack Overflow. No swarm, no other nodes or hosts at all; just trying to get it to work locally first, without wrapping docker-compose in extra tooling. The main idea is to get rid of these copies and just use replicas:
```yaml
services:
  configsvr0:
    image: mongo
    command: mongod --configsvr --replSet configsvr --port 27017 --dbpath /data/db --keyFile /data/keyfile
    volumes:
      - configsvr0:/data/db
      - ./keyfile:/data/keyfile:ro
  configsvr1:
    image: mongo
    command: mongod --configsvr --replSet configsvr --port 27017 --dbpath /data/db --keyFile /data/keyfile
    volumes:
      - configsvr1:/data/db
      - ./keyfile:/data/keyfile:ro
  configsvr2:
    image: mongo
    command: mongod --configsvr --replSet configsvr --port 27017 --dbpath /data/db --keyFile /data/keyfile
    volumes:
      - configsvr2:/data/db
      - ./keyfile:/data/keyfile:ro

volumes:
  configsvr0:
  configsvr1:
  configsvr2:
```
See the difference? It's just counting up on the volume name...
It would be great to just have something like this:
```yaml
services:
  configsvr0:
    image: mongo
    command: mongod --configsvr --replSet configsvr --port 27017 --dbpath /data/db --keyFile /data/keyfile
    volumes:
      - configsvr:/data/db
      - ./keyfile:/data/keyfile:ro

volumes:
  configsvr:
    name: 'configsvr-{{.Task.Slot}}'
```
Tried this (see below) but it didn't work.
Note: below I only copied the small chunks from my yml that I thought were important for this.
test 1 - volume name
This would require adding an entry under volumes for every replica, so it's not really dynamic - but I gave it a try:
```yaml
    volumes:
      - "configsvr{{.Task.Slot}}:/data/db"
```

```
➜  msc git:(develop) ✗ docker-compose up configsvr0
service "configsvr0" refers to undefined volume configsvr{{.Task.Slot}}: invalid compose project
```
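As an aside: even without dynamic names, the copy-paste in the per-service blocks can at least be shrunk with a YAML extension field and merge keys. A sketch (the `x-configsvr` anchor name is my own choice; the volumes still have to be enumerated by hand):

```yaml
x-configsvr: &configsvr
  image: mongo
  command: mongod --configsvr --replSet configsvr --port 27017 --dbpath /data/db --keyFile /data/keyfile

services:
  configsvr0:
    <<: *configsvr
    volumes:
      - configsvr0:/data/db
      - ./keyfile:/data/keyfile:ro
  configsvr1:
    <<: *configsvr
    volumes:
      - configsvr1:/data/db
      - ./keyfile:/data/keyfile:ro

volumes:
  configsvr0:
  configsvr1:
```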
test2 - dynamic name with task slot
This would be dynamic, but it still doesn't work, even though it was an accepted answer on Stack Overflow. It's actually my sample from the top.
```yaml
    volumes:
      - configsvr:/data/db

volumes:
  configsvr:
    name: 'configsvr-{{.Task.Slot}}'
```

```
➜  msc git:(develop) ✗ docker-compose up configsvr0
[+] Running 0/0
 ⠿ Volume "configsvr-{{.Task.Slot}}"  Error  0.0s
Error response from daemon: create configsvr-{{.Task.Slot}}: "configsvr-{{.Task.Slot}}" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path
```
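Since these Go templates are resolved by the Swarm engine at task-scheduling time, plain docker-compose never expands them and passes the literal string through as a volume name, which then fails validation. Until something like this lands in the spec, one workaround is to generate the numbered services and volumes with a small script. A sketch, not anything official (the replica count `N=3` and the output layout are my own choices); it writes a compose fragment to stdout:

```shell
# Generate a compose fragment with N numbered config-server replicas,
# each bound to its own named volume.
N=3
yaml='services:
'
i=0
while [ "$i" -lt "$N" ]; do
  yaml="${yaml}  configsvr${i}:
    image: mongo
    command: mongod --configsvr --replSet configsvr --port 27017 --dbpath /data/db --keyFile /data/keyfile
    volumes:
      - configsvr${i}:/data/db
      - ./keyfile:/data/keyfile:ro
"
  i=$((i + 1))
done
# Declare the matching top-level named volumes.
yaml="${yaml}volumes:
"
i=0
while [ "$i" -lt "$N" ]; do
  yaml="${yaml}  configsvr${i}:
"
  i=$((i + 1))
done
printf '%s' "$yaml"
```

Redirect the output to a file and pass it to docker-compose with `-f`.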
my env
macOS Monterey with Docker Desktop.

```
➜  msc git:(develop) ✗ docker -v
Docker version 20.10.11, build dea9396
➜  msc git:(develop) ✗ docker-compose -v
Docker Compose version v2.2.1
```

docker-compose.yml version: 3.9
(let me leave this comment here as well)
In general, I think that having some templating options in the compose-spec would be a nice complement to environment-variable substitution (I recall I started writing up some ideas, but never got round to completing those 😓).
That said, it will likely require some discussion, as the idea behind the compose-spec is to also make it portable enough to be useful in different scenarios (e.g. also to deploy to ACI and ECS; https://docs.docker.com/cloud/ecs-integration/), so it may need to be looked into whether all of those are "portable" enough to be used for such cases.
Hi, I see this issue is closed. Is there a solution available for this today?
@mbogner is there a PR to go along with marking this as completed? I can't find any documentation on this feature, or when it will be available.
I haven't heard about any update. Can't remember why I closed the issue.
+1 for this feature. any updates on this?
Just in case this helps anyone: at least in my case (Docker Swarm), something like what @mbogner showed in the "test2 - dynamic name with task slot" approach from the original post worked fine. Here is the example:
Considerations regarding template variables:

- `'{{.Service.Name}}_{{.Task.Slot}}'` produces volume names like:
  - `n8n_n8n-main_1`
  - `n8n_n8n-worker_1`
  - `n8n_n8n-worker_2`
- If you use `{{.Task.Name}}` it will include the unique identifier of the task, so you would be generating a new volume each time you restart a container (while updating the n8n version, for instance).
- You have to reference them from the `services` with the `volumes` key, that is, `n8n_data` in my case.
Full example:
```yaml
version: '3.7'

volumes:
  n8n_data:
    name: '{{.Service.Name}}_{{.Task.Slot}}'
  redis_data:

x-common-env: &common-env
  NODE_ENV: production
  N8N_LOG_LEVEL: warn
  EXECUTIONS_MODE: queue
  QUEUE_BULL_REDIS_HOST: redis
  DB_TYPE: postgresdb
  # … (redacted for brevity)

services:
  n8n-main:
    image: n8nio/n8n:1.40.0
    ports:
      - "80:5678"
    environment:
      <<: *common-env
    volumes:
      - n8n_data:/home/node/.n8n
      - /home/${user}/n8n-workflow_executions_shared_files:/home/node/n8n-workflow_executions_shared_files
    deploy:
      replicas: 1
      # … (redacted for brevity)
  n8n-worker:
    image: n8nio/n8n:1.40.0
    command: worker
    environment:
      <<: *common-env
      # Specific configuration for workers
      QUEUE_HEALTH_CHECK_ACTIVE: "true"
      QUEUE_HEALTH_CHECK_PORT: 5678
    volumes:
      - n8n_data:/home/node/.n8n
      - /home/${user}/n8n_workflow_executions-shared_files:/home/node/n8n_workflow_executions-shared_files
    deploy:
      replicas: 3
      # … (redacted for brevity)
  redis:
    image: "redis:7.2.4"
    volumes:
      - redis_data:/data
    deploy:
      replicas: 1
      # … (redacted for brevity)
```
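The naming behavior in the considerations above can be sketched with a tiny shell function. `expand_volume_names` is a hypothetical helper of mine that just mimics how Swarm fills in `{{.Service.Name}}` and `{{.Task.Slot}}` per task; the real substitution is done by the Swarm engine, and service names already carry the stack prefix (stack `n8n` here):

```shell
# Mimic '{{.Service.Name}}_{{.Task.Slot}}' expansion for a replicated
# service: one name per task slot, counting from 1.
expand_volume_names() {
  service=$1
  replicas=$2
  slot=1
  while [ "$slot" -le "$replicas" ]; do
    printf '%s_%s\n' "$service" "$slot"
    slot=$((slot + 1))
  done
}

expand_volume_names n8n_n8n-main 1     # → n8n_n8n-main_1
expand_volume_names n8n_n8n-worker 3   # → n8n_n8n-worker_1 … n8n_n8n-worker_3
```

This also makes the `{{.Task.Name}}` caveat visible: the task name additionally embeds a per-task ID, so a restarted task would map to a brand-new volume instead of reusing the slot's volume.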