
Using scale number to adjust mounted volumes?

Open Sushisource opened this issue 7 years ago • 12 comments

I have a situation where I need to create many instances of a docker container, and they should have mounts that enable them to persist their data to different locations, so they aren't all running into each other. The containers are all worker agents for some non-docker-controlled server.

Essentially I need to be able to do something like this:

services:
    agent:
        volumes:
            - /mnt/dat/agent_${DOCKER_SCALE_NUM}:/data/agent

I found a few references when googling around to something like this, but all of them seemed to have a resolution along the lines of "you don't really want to do this" or "here's some other way to solve your problem that doesn't involve doing this".

It doesn't need to be a sequential number or anything like that - I just need some way to get unique mount points for each one.

Is there an existing way to do this? If not, is anything planned?
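
For what it's worth, the closest I've come without compose support is mounting the shared parent directory and letting each container carve out a subdirectory named after its own hostname, which defaults to the unique container ID. A rough sketch (the image name and agent binary are placeholders):

services:
    agent:
        image: my-agent-image
        volumes:
            - /mnt/dat:/data
        # $$ escapes $ in compose files, so the shell sees $(hostname)
        command: sh -c 'mkdir -p /data/agent_$$(hostname) && exec my-agent --data-dir /data/agent_$$(hostname)'

The catch is that the container ID changes whenever a container is recreated, so the directory is only stable for a container's lifetime - fine for scratch data, not great for persistence.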

Thanks!

Sushisource avatar Mar 06 '17 22:03 Sushisource

I have the same use-case: I would like a variable number of Elasticsearch containers, each writing to its own volume, with its own distinct internal name, exposed on its own port, etc. E.g. a docker-compose definition such as

  es:
    build: ./es/
    restart: unless-stopped
    environment:
      - node.name=es${DOCKER_SCALE_NUM}
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - es_${DOCKER_SCALE_NUM}:/usr/share/elasticsearch/data
    ports:
      - ${DOCKER_SCALE_NUM+CONSTANT}:9200
    networks:
      - elastic

which, when run with `docker-compose up --scale es=5`, I would expect to give me 5 containers, 5 volumes with distinct names, 5 distinct exposed ports (starting from some $CONSTANT, e.g. 9199, so that the first container hits the expected 9200 port, or similar) and a single network they all connect to.
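
The ports part at least has a partial workaround today: publish a host port range and let each scaled replica grab the next free port from it. A sketch, assuming 9200-9204 are free on the host (this does nothing for the volume names):

  es:
    build: ./es/
    ports:
      - "9200-9204:9200"

Then `docker-compose up --scale es=5` gives every replica its own host port, although which replica ends up on which port is not deterministic.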

manniche avatar Jan 21 '20 08:01 manniche

no response yet and no workaround either?

ruimoliveira avatar Feb 22 '20 10:02 ruimoliveira

Seems like Docker alone is not a suitable solution for such a scenario - it heavily smells of orchestration, so it's a better fit for Kubernetes or Nomad

Would be nice to hear from the devs whether there are any plans, and what the recommended course of action is when you need this kind of functionality...
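
Short of a full Kubernetes or Nomad move, swarm mode already gets part-way there: `docker service create` accepts Go templates in a few flags, including the mount source, so every task slot can get its own named volume. A sketch, with `my-agent-image` standing in for the real image:

    docker swarm init   # a single-node swarm is enough
    docker service create \
        --name agent \
        --replicas 5 \
        --mount type=volume,source=agent-{{.Task.Slot}},destination=/data/agent \
        my-agent-image

That yields volumes agent-1 through agent-5, and slot numbers stay stable across task restarts - it just doesn't help plain `docker-compose up --scale`.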

let4be avatar Jun 21 '21 15:06 let4be

It's a shame that this never got any answer; this seems like a needed part of --scale.

KingPin avatar Jul 23 '21 10:07 KingPin

I would also like to see this, is there any update/info from the docker team?

Sorry for tagging you but it seems that this may have been lost amongst the issues... @shin- @aanand @ulyssessouza @aiordache ?

Soneji avatar Aug 13 '21 10:08 Soneji

> Seems like Docker alone is not a suitable solution for such a scenario - it heavily smells of orchestration, so it's a better fit for Kubernetes or Nomad

There's a pretty wide gap between copying a container a few times and an entire k8s cluster. This would mainly help with cleaning up copy-paste and YAML anchors/references - you can almost do this already, if not for the missing features proposed above.
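
For concreteness, the anchor-based version looks roughly like this, assuming a compose file version that allows top-level x- extension fields (service names and host paths invented for the example):

    x-agent: &agent-defaults
      image: my-agent-image
      restart: unless-stopped

    services:
      agent1:
        <<: *agent-defaults
        volumes:
          - /mnt/dat/agent_1:/data/agent
      agent2:
        <<: *agent-defaults
        volumes:
          - /mnt/dat/agent_2:/data/agent

Every additional replica is another pasted block, which is exactly the boilerplate a scale-aware variable would eliminate.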

lcsondes avatar Aug 14 '21 23:08 lcsondes

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Apr 16 '22 14:04 stale[bot]

bump

Soneji avatar Apr 17 '22 15:04 Soneji

This issue has been automatically marked as not stale anymore due to the recent activity.

stale[bot] avatar Apr 17 '22 15:04 stale[bot]

Also hit this problem. The solution proposed in this ticket would help solve it.

dubrsl avatar May 20 '22 22:05 dubrsl

@glours any thoughts on this? It would be a really nice feature and so far no official response, despite getting around 30 upvotes here.

apacha avatar Aug 16 '22 13:08 apacha

bump

zckevin avatar Sep 01 '22 02:09 zckevin

bump, also have this same issue

ccmetz avatar Oct 03 '22 20:10 ccmetz