cht-core

Arch v3 Docker Compose Nomenclature: dynamic, use of `COMPOSE_PROJECT_NAME` and no `container_name`

Open · mrjones-plip opened this issue 2 years ago

Describe the issue

When reviewing the new cht-core and cht-couchdb docker-compose files for archv3 currently published to staging.dev.medicmobile.org, I'd prefer not to use explicit names for the networks and the volumes. These can be set dynamically within a project when using COMPOSE_PROJECT_NAME, but only if they're not explicitly set in the compose file. This is a huge win for both multi-tenant and developer setups (which are often multi-tenant ;)

I.e., go from:

   networks: 
      cht-net: 
         name: cht-net 

to this:

   networks: 
      cht-net: 

Whichever way we go, the way we declare networks and volumes should be uniform; the CouchDB and CHT Core compose files currently differ (note the extra `-` and the missing `:` in the snippets below). My vote is for the former!

CouchDB:

   networks: 
      cht-net: 

CHT Core:

   networks: 
      - cht-net

While we consider using COMPOSE_PROJECT_NAME, I recommend not using container_name for each container. Containers already get named after the outermost stanza value (the service name), and in every case the container_name matches the service (e.g. container_name: cht-nginx == cht-nginx:). Intra-container lookups will not be affected.

Further, implicit nomenclature will simplify scenarios where we have service name <-> container name mismatches. For example, the haproxy service defaults to using the docker DNS name couchdb, as seen here: "COUCHDB_SERVERS=${COUCHDB_SERVERS:-couchdb}", but the container is actually called something different (container_name: cht-couchdb). The reason this currently works is that DNS resolves both the service name and the container name.
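To illustrate, here's a minimal sketch (reusing the image names from the compose files further down; this is not a proposed final config) where no container_name is set and haproxy still reaches CouchDB by service name:

version: '3.9'

services:
  couchdb:
    # The service name doubles as the docker DNS name; no container_name needed.
    image: public.ecr.aws/s5s3h4s7/cht-couchdb:4.0.0-alpha
    networks:
      cht-net:

  haproxy:
    image: public.ecr.aws/s5s3h4s7/cht-haproxy:4.0.0-alpha
    environment:
      # Resolves via the service name "couchdb", matching the existing default.
      - "COUCHDB_SERVERS=${COUCHDB_SERVERS:-couchdb}"
    networks:
      cht-net:

networks:
  # No explicit name:, so Compose derives "<project>_cht-net" from COMPOSE_PROJECT_NAME.
  cht-net: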

Describe alternatives you've considered

There may have been intentional design decisions that I don't appreciate that mandate sticking with the current nomenclature.

mrjones-plip · Sep 16 '22

IMO this shouldn't block the 4.0.0 release - marking as low priority.

garethbowen · Sep 22 '22

This was a design decision to make sure docker-compose mapped containers correctly even when the containers were started by the cht-upgrade-service, but I've tested it today and it also seems to work when using project names.

I think it's a fairly simple change, and worth the exploration. Since this affects both cht-core and cht-upgrade-service, I think it's best if we decide on the direction now rather than later, when we'd also need everyone to upgrade their upgrade-service.

dianabarsan · Sep 29 '22

My vote is obvious given I filed this ticket (no explicit container, network, and volume names); however, I'm happy to facilitate a real-time discussion about this if that'd be helpful. As well, if it'd help, I think I could pull together a PR for this with a little poking around to find the source files.

mrjones-plip · Sep 29 '22

Stumbled upon this good quote (emphasis mine):

Docker Compose normally assigns a container name based on its current project name and the name of the services: block. Specifying container_name: explicitly overrides this; but, it means you can’t launch multiple copies of the same Compose file with different project names (from different directories) because the container name you’ve explicitly chosen won’t be used.

You almost never care what the container name is explicitly. It only really matters if you’re trying to use plain docker commands to manipulate Compose-managed containers; it has no impact on inter-service communication. Just delete the container_name: line.

(For similar reasons you can almost always delete hostname: and links: sections if you have them with no practical impact on your overall system.)

mrjones-plip · Oct 03 '22

Unfortunately, I don't think we can use the dynamically generated docker network. That's because, in self-hosted environments, we have to run the cht-upgrade-service separately (a separate docker-compose file in a different folder, which we run separately: https://github.com/medic/cht-upgrade-service/blob/main/docker-compose.yml). For security purposes, we don't expose the upgrade service port to the host, so it's only accessible from within the network (since we can't authenticate these requests). Is there a way that the cht-upgrade-service can still remain isolated from the host and accessible within the "cht" network?

dianabarsan · Oct 03 '22

I'm going to try to use an environment variable to define the network first.
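For instance, a sketch of what that could look like (the CHT_NETWORK variable name here is illustrative, not the final implementation):

networks:
  cht-net:
    # Every compose file started with the same CHT_NETWORK value lands on
    # the same docker network; the default preserves today's behaviour.
    name: ${CHT_NETWORK:-cht-net}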

dianabarsan · Oct 03 '22

Is there a way that the cht-upgrade-service can still remain isolated from the host and accessible within the "cht" network?

I think you hopefully answered your own question with:

I'm going to try to use an environment variable to define the network first.

But let me know!

mrjones-plip · Oct 03 '22

Is there a way that the cht-upgrade-service can still remain isolated from the host and accessible within the "cht" network?

tl;dr - if you start the upgrade service with the same COMPOSE_PROJECT_NAME as you did your other containers, everything should just work. Mainly, services can be resolved by the outermost name (e.g. cht-nginx:), and no explicit name is needed (container_name: cht-nginx) as long as they're on the same network. Unauthenticated calls using the default service name (without the project name) should also work. Further, compose instances on different networks (as defined by different COMPOSE_PROJECT_NAME values) will not be able to access each other's docker networks.

Background on this - I was testing while filling out #7832 and discovered that, with a modified compose file and an env var of COMPOSE_PROJECT_NAME=testee, you get volumes of testee_cht-ssl and testee_cht-credentials, a network of testee_cht-net, and containers to match (per docker ps --format '{{.Names}}' | grep test):

testee_couchdb_1
testee_cht-nginx_1
testee_cht-sentinel_1
testee_cht-api_1
testee_haproxy_1
testee_healthcheck_1

To fully reproduce my test environment:

Grab this core compose file:
version: '3.9'

services:
  haproxy:
    image: public.ecr.aws/s5s3h4s7/cht-haproxy:4.0.0-alpha
    container_name: cht-haproxy
    hostname: haproxy
    environment:
      - "HAPROXY_IP=${HAPROXY_IP:-haproxy}"
      - "COUCHDB_USER=${COUCHDB_USER:-admin}"
      - "COUCHDB_PASSWORD=${COUCHDB_PASSWORD}"
      - "COUCHDB_SERVERS=${COUCHDB_SERVERS:-couchdb}"
      - "HAPROXY_PORT=${HAPROXY_PORT:-5984}"
    logging:
      driver: "local"
      options:
        max-size: "${LOG_MAX_SIZE:-50m}"
        max-file: "${LOG_MAX_FILES:-20}"
    networks:
      - cht-net
    expose:
      - ${HAPROXY_PORT:-5984}

  healthcheck:
    image: public.ecr.aws/s5s3h4s7/cht-haproxy-healthcheck:4.0.0-alpha
    container_name: cht-haproxy-healthcheck
    environment:
      - "COUCHDB_SERVERS=${COUCHDB_SERVERS:-couchdb}"
      - "COUCHDB_USER=${COUCHDB_USER:-admin}"
      - "COUCHDB_PASSWORD=${COUCHDB_PASSWORD}"
    logging:
      driver: "local"
      options:
        max-size: "${LOG_MAX_SIZE:-50m}"
        max-file: "${LOG_MAX_FILES:-20}"
    networks:
      - cht-net

  cht-api:
    image: public.ecr.aws/s5s3h4s7/cht-api:4.0.0-alpha
    container_name: cht-api
    depends_on:
      - haproxy
    expose:
      - "${API_PORT:-5988}"
    environment:
      - COUCH_URL=http://${COUCHDB_USER:-admin}:${COUCHDB_PASSWORD:?COUCHDB_PASSWORD must be set}@haproxy:${HAPROXY_PORT:-5984}/medic
      - BUILDS_URL=${MARKET_URL_READ:-https://staging.dev.medicmobile.org}/${BUILDS_SERVER:-_couch/builds}
      - UPGRADE_SERVICE_URL=${UPGRADE_SERVICE_URL:-http://localhost:5100}
    logging:
      driver: "local"
      options:
        max-size: "${LOG_MAX_SIZE:-50m}"
        max-file: "${LOG_MAX_FILES:-20}"
    networks:
      - cht-net

  cht-sentinel:
    image: public.ecr.aws/s5s3h4s7/cht-sentinel:4.0.0-alpha
    container_name: cht-sentinel
    depends_on:
      - haproxy
    environment:
      - COUCH_URL=http://${COUCHDB_USER:-admin}:${COUCHDB_PASSWORD}@haproxy:${HAPROXY_PORT:-5984}/medic
      - API_HOST=cht-api
    logging:
      driver: "local"
      options:
        max-size: "${LOG_MAX_SIZE:-50m}"
        max-file: "${LOG_MAX_FILES:-20}"
    networks:
      - cht-net

  cht-nginx:
    image: public.ecr.aws/s5s3h4s7/cht-nginx:4.0.0-alpha
    container_name: cht-nginx
    depends_on:
      - cht-api
      - haproxy
    ports:
      - "${NGINX_HTTP_PORT:-80}:80"
      - "${NGINX_HTTPS_PORT:-443}:443"
    volumes:
      - cht-ssl:/root/.acme.sh/
    environment:
      - "CERTIFICATE_MODE=${CERTIFICATE_MODE:-SELF_SIGNED}"
      - "SSL_CERT_FILE_PATH=${SSL_CERT_FILE_PATH:-/etc/nginx/private/cert.pem}"
      - "SSL_KEY_FILE_PATH=${SSL_KEY_FILE_PATH:-/etc/nginx/private/key.pem}"
      - "COMMON_NAME=${COMMON_NAME:-test-nginx.dev.medicmobile.org}"
      - "EMAIL=${EMAIL:[email protected]}"
      - "COUNTRY=${COUNTRY:-US}"
      - "STATE=${STATE:-California}"
      - "LOCALITY=${LOCALITY:-San_Francisco}"
      - "ORGANISATION=${ORGANISATION:-medic}"
      - "DEPARTMENT=${DEPARTMENT:-Information_Security}"
    logging:
      driver: "local"
      options:
        max-size: "${LOG_MAX_SIZE:-50m}"
        max-file: "${LOG_MAX_FILES:-20}"
    networks:
      - cht-net

networks:
  cht-net:
    name: cht-net

volumes:
  cht-ssl:
    name: cht-ssl

Grab this couchdb compose file:
version: '3.9'

services:
  couchdb:
    image: public.ecr.aws/s5s3h4s7/cht-couchdb:4.0.0-alpha
    container_name: cht-couchdb
    volumes:
      - ${COUCHDB_DATA:-./srv}:/opt/couchdb/data
      - cht-credentials:/opt/couchdb/etc/local.d/
    environment:
      - "COUCHDB_USER=${COUCHDB_USER:-admin}"
      - "COUCHDB_PASSWORD=${COUCHDB_PASSWORD:?COUCHDB_PASSWORD must be set}"
      - "COUCHDB_SECRET=${COUCHDB_SECRET}"
      - "COUCHDB_UUID=${COUCHDB_UUID}"
      - "SVC_NAME=${SVC_NAME:-couchdb}"
      - "COUCHDB_LOG_LEVEL=${COUCHDB_LOG_LEVEL:-error}"
    restart: always
    logging:
      driver: "local"
      options:
        max-size: "${LOG_MAX_SIZE:-50m}"
        max-file: "${LOG_MAX_FILES:-20}"
    networks:
      cht-net:

volumes:
  cht-credentials:

networks:
  cht-net:
    name: cht-net

and then run this command:

COMPOSE_PROJECT_NAME=testee COUCHDB_PASSWORD=password COUCHDB_USER=medic docker-compose -f docker-compose_cht-core.yml -f docker-compose_cht-couchdb.yml up -d

It should just work!

mrjones-plip · Oct 05 '22

isolated from the host

I forgot to speak to this! It will be just as isolated (or just as not isolated) as if we didn't use the COMPOSE_PROJECT_NAME value. That is, if we don't publish any ports to the host, then the host won't be able to get in (aside from the fact that it could launch a container on that same docker network, but I assume that's out of scope).
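For reference, a minimal sketch of that pattern (the service name is illustrative; port 5100 is the upgrade service default seen above):

services:
  upgrade-service:
    # expose: makes the port reachable only from other containers on the
    # shared docker network; nothing is published on the host.
    expose:
      - "5100"
    # By contrast, a ports: mapping like the commented one below WOULD
    # publish the port on the host, which is exactly what we avoid.
    # ports:
    #   - "5100:5100"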

mrjones-plip · Oct 05 '22

Thanks @mrjones-plip! I already went through the custom network yesterday but didn't have time to also add an update here. This is the pull request for cht-upgrade-service: https://github.com/medic/cht-upgrade-service/pull/17. I want to create a branch in cht-core where we run the upgrade e2e test against this version of cht-upgrade-service, and if it works, I'll send this to the presses :)

dianabarsan · Oct 05 '22

if you start the upgrade service with the same COMPOSE_PROJECT_NAME

I'm reluctant to do this, especially because we want to treat the upgrade service and the CHT containers separately: started separately, upgraded separately. The docker-compose files for cht-core and cht-upgrade-service need to be in different folders on disk. If someone wanted to stop the cht-upgrade-service alone using docker-compose, they'd end up taking cht-core down as well.

The alternative is to have two additional environment variables that we pass to the upgrade-service when starting (see the sketch after this list):

  • CHT_COMPOSE_PROJECT - project name to be used when starting CHT containers
  • CHT_NETWORK_NAME - the network that the cht-upgrade-service and the cht-core containers would share
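As a sketch, the upgrade-service compose file could consume those variables roughly like this (illustrative only; the actual shape is whatever lands in the PRs):

services:
  cht-upgrade-service:
    environment:
      # Project name the upgrade service uses when starting CHT containers:
      - "CHT_COMPOSE_PROJECT=${CHT_COMPOSE_PROJECT:-cht}"
    networks:
      - cht-net

networks:
  cht-net:
    # Both the upgrade-service and the cht-core compose files declare the
    # same parameterized name, so separately started projects share one network.
    name: ${CHT_NETWORK_NAME:-cht-net}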

Appreciate the feedback :)

dianabarsan · Oct 06 '22

I'm reluctant to do this, especially because we want to treat the upgrade service and the CHT containers separately: started separately, upgraded separately

I might be misunderstanding the desired use case, but you can start the upgrade service in different directories with different COMPOSE_PROJECT_NAME values. Here's a video illustrating the namespace protections:

https://user-images.githubusercontent.com/8253488/194377607-1a770d7e-627c-481a-a6b5-60a6790733b3.mp4

To me this removes the need for your proposed CHT_COMPOSE_PROJECT and CHT_NETWORK_NAME env vars.

mrjones-plip · Oct 06 '22

with different COMPOSE_PROJECT_NAME values.

Your test proves my point exactly: the "upgrader_upgrader_1" container can't reach nginx over the docker network. Only the upgrader containers that you started with the cht1 and cht2 project names were able to reach their corresponding nginx containers.

What happens when you do: COMPOSE_PROJECT_NAME=cht1 docker-compose down in the upgrader folder?

dianabarsan · Oct 06 '22

What happens when you do: COMPOSE_PROJECT_NAME=cht1 docker-compose down in the upgrader folder?

Compose will only apply the action (up, down, or restart) to the compose file you specify. In this case, since the compose file only has the upgrader service, it only affects that one container. You'll see me take down the upgrader, bring it back up, and re-attach to the still-running cht1 project:

https://user-images.githubusercontent.com/8253488/194396405-278fd70f-e811-41b8-a1a5-be34ac913fda.mp4

Noteworthy: if you try to do a -v (tear down all the related networks and volumes), it actually gives an error because the resources are still in use:

➜  upgrader COMPOSE_PROJECT_NAME=cht1 docker-compose down -v
Stopping cht1_upgrader_1 ... done
Removing cht1_upgrader_1 ... done
Removing network cht1_cht-net
ERROR: error while removing network: network cht1_cht-net id ceea7eb6a4a57c0f4a44a8c374098558489bfc46406bfc85093e45ea989de1c4 has active endpoints

mrjones-plip · Oct 06 '22

I'm trying the same experiment with docker-compose v2, and it's taking down both containers :( Could you please give that one a try as well?

https://user-images.githubusercontent.com/35681649/194805269-1a998766-3837-4873-bc9b-01fdeec7c7d1.mp4

dianabarsan · Oct 10 '22

Ah hah, good catch! I do see that compose v2 is more aggressive about shutting down other containers in the project now.

I ...think this is still ok? That is, I think we want to have an instance of cht-upgrade running per instance of core, right?

mrjones-plip · Oct 12 '22

Yes, but I don't think we want to lose the ability to take them down individually.

dianabarsan · Oct 12 '22

I'm leaning towards not using the same project name, and keeping cht-upgrade-service separate.

dianabarsan · Oct 12 '22

I haven't played enough with the upgrader docker stuff to have an opinion on this just yet. I trust your judgement on how best to proceed.

mrjones-plip · Oct 12 '22

Ok, cool, thanks for all the support. I have two PRs ready that add the CHT_COMPOSE_PROJECT and CHT_NETWORK_NAME, in both cht-upgrade-service and cht.

dianabarsan · Oct 13 '22

I've merged this directly, because the update required changes in both cht and cht-upgrade-service, and once the upgrade-service was merged, the upgrade e2e in cht stopped working.

AT should cover the following:

  • the upgrade service starts and upgrades containers with default project name and default network
  • the upgrade service starts and upgrades containers with custom project name and custom network
  • users can start multiple CHT instances on the same machine
  • users can start multiple CHT + upgrade service on the same machine (once https://github.com/medic/cht-upgrade-service/pull/18 lands)
  • taking down upgrade service doesn't take down CHT
  • all of the above work with docker-compose v1 and docker-compose v2

mrjones edit: since this has been merged to master, we can use the compose files from there.

dianabarsan · Oct 17 '22

  • the upgrade service starts and upgrades containers with default project name and default network (screenshot attached)
  • the upgrade service starts and upgrades containers with custom project name and custom network (screenshot attached)
  • users can start multiple CHT instances on the same machine (screenshot attached)
  • users can start multiple CHT + upgrade service on the same machine
  • taking down upgrade service doesn't take down CHT
  • all of the above work with docker-compose v1 and docker-compose v2

ngaruko · Oct 19 '22