acme-companion
I rewrote this to work with Docker Swarm
Hi. I rewrote this project to work with Docker Swarm, and I wanted to link it here for a few reasons: 1) I'd like you to feel free to take any code you'd like from the project in order to make this project swarm-enabled (even take it all; I don't need to maintain my own project), and 2) so the authors can comment on or review it if they wish.
Keep in mind I haven't had a chance to look at the READMEs or anything; I slapped this together quickly just a week or so ago and have been using it on my own swarm for about a week now. The repo I link below contains a container that handles the Let's Encrypt side of things, as well as a separate container that replaces nginx-proxy. The main concept is the same, though, and the Let's Encrypt side in fact reuses the vast majority of the current code.
https://git.qoto.org/modjular/swarm-proxy
Hi @freemo, I've stumbled into this while trying to figure out how to get the nginx-proxy setup to work with a swarm, and figured I'd try out your rewrite. Could you post an example docker-compose.yml for the stack? I've done some minor rewriting to make the images build on ARM (because my swarm runs on Raspberry Pis) and I'd love to submit a PR to your project to make that work, but I can't currently get it running sensibly enough to test that my changes work...
@fpiesche Sure, I'd be happy to, though I'm not sure it would be welcome here, and it may be more appropriate to ask this and related questions on the repo at the link I provided. I don't want the developers of this repo to think I'm trying to steal their thunder or anything. With that said, while I do encourage you to ask questions about my fork on my repo, I will provide the example compose files here, since the reason for opening this issue is to encourage the project maintainers to integrate my fork if they wish, and having example compose files might help facilitate that.
Here is the example compose file I use for bringing up the load balancer itself (swarm-proxy + swarm-proxy-letsencrypt, the latter being the fork of this project):
services:
  proxy:
    image: modjular/swarm-proxy
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
      - target: 443
        published: 443
        protocol: tcp
        mode: host
    dns:
      - 8.8.8.8
    healthcheck:
      test: ["CMD-SHELL", "curl --head --silent --fail http://localhost/.well-known/acme-challenge/active.html || exit 1"]
      start_period: 15s
    deploy:
      mode: global
      placement:
        constraints:
          - "node.labels.accepting==load-balancer"
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 70M
    volumes:
      - type: volume
        source: html
        target: /usr/share/nginx/html
      - type: volume
        source: certs
        target: /etc/nginx/certs
      - type: volume
        source: vhostd
        target: /etc/nginx/vhost.d
      - type: volume
        source: confd
        target: /etc/nginx/conf.d
      - type: volume
        source: servd
        target: /etc/nginx/serv.d
      - type: volume
        source: locd
        target: /etc/nginx/loc.d
      - type: volume
        source: dhparam
        target: /etc/nginx/dhparam
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
  letsencrypt:
    image: modjular/swarm-proxy-letsencrypt
    dns:
      - 8.8.8.8
    deploy:
      replicas: 1
      placement:
        constraints:
          - "node.labels.accepting==all"
          - "node.role==manager"
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 70M
    volumes:
      - type: volume
        source: html
        target: /usr/share/nginx/html
      - type: volume
        source: certs
        target: /etc/nginx/certs
      - type: volume
        source: vhostd
        target: /etc/nginx/vhost.d
      - type: volume
        source: confd
        target: /etc/nginx/conf.d
      - type: volume
        source: servd
        target: /etc/nginx/serv.d
      - type: volume
        source: locd
        target: /etc/nginx/loc.d
      - type: volume
        source: dhparam
        target: /etc/nginx/dhparam
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
volumes:
  html:
  vhostd:
  confd:
  servd:
  locd:
  certs:
  dhparam:
Notice that some of the above, such as the placement constraints, isn't strictly needed; I personally find it useful, but you are free to modify the placements to fit your own situation. Just keep in mind there should only ever be one instance of letsencrypt, while there can be as many instances of swarm-proxy as you wish. Also keep in mind that letsencrypt and swarm-proxy do need to reside on manager nodes, but they do not need to be replicated onto every manager node. Finally, letsencrypt does not need to be on the same physical box as swarm-proxy, but they do both need to share the same volumes (in my case via NFS).
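For what it's worth, if you don't already have NFS wired up, a named volume can be backed by NFS using the stock local driver; something like this sketch (the server address and export path here are placeholders, adapt them to your own share):

volumes:
  certs:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,rw"               # placeholder NFS server address
      device: ":/exports/swarm-proxy/certs"   # placeholder export path

You'd repeat that pattern for each of the shared volumes (html, vhostd, and so on) so every node mounts the same data.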
Here is an example of bringing up a simple stack behind swarm-proxy that has only a single service:
version: '3.7'
services:
  app:
    image: requarks/wiki:2
    ports:
      - "8087:3000"
    dns:
      - 8.8.8.8
    environment:
      - DB_TYPE=postgres
      - DB_HOST=mydatabase.com
      - DB_PORT=5432
      - DB_USER=some_user_name
      - DB_PASS=my_super_secret_password
      - DB_NAME=some_db_name
    volumes:
      - data:/wiki/data/content
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - "node.labels.accepting==all"
      labels:
        swarm-proxy: "true"
        swarm-proxy-upstream: "host.docker.internal"
        swarm-proxy-port: "8087"
        swarm-proxy-host: "docs.cleverthis.com"
        swarm-proxy-email: "[email protected]"
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 128M
volumes:
  data:
One note about the above: the "swarm-proxy-upstream" label basically tells the proxy what IP address (or equivalent DNS hostname) will expose the listed port (8087). What you set this to depends on how you structure your swarm. I specifically use "host.docker.internal" because it resolves to the host machine's IP address, automatically discovered inside my containers. By default only Mac and, I think, Windows containers have this hosts definition; Linux does not. Since I run my instance on Linux, I wrote a base container that all my images build on, which automatically discovers the host IP and adds a line to the hosts file defining host.docker.internal. This lets my containers act similarly to the feature provided on Mac and Windows while still being somewhat "standard" in how I do it. With that said, you should figure out how you will solve this issue in your own cloud and modify this label accordingly.
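As an aside, on recent Docker Engine releases (20.10 and later) the special host-gateway value may get you a similar effect without a custom base image, though as far as I know its support in swarm services has varied by engine version, so treat this as a sketch to verify on your own setup:

services:
  app:
    # maps host.docker.internal to the host's gateway IP inside the container
    extra_hosts:
      - "host.docker.internal:host-gateway"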
An interesting note about the meaning of the upstream label and why this works: the swarm uses a sort of shared port allocation across all systems (Docker calls this the routing mesh, or ingress load balancing, IIRC). What this means is that if a container on physical computer A publishes port 8087 on its host, then physical computer B also has host port 8087 open, and connections to it are transparently routed to the container on system A. In this way, in a swarm you do not need to know where a service is hosted; you only need to know what port it claims, and you can reach it from anywhere in the swarm as a local port. That is why host.docker.internal works: I set it the same for all my hosted containers and there is no need to change it; the load balancer just connects to the local host IP in all cases, and the swarm routes it. But again, this only works in my scenario; you will need to set this value to whatever makes sense for your own setup.
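To make that concrete: this is just the default ingress publishing mode. The long-form ports syntax makes it explicit (contrast with the mode: host entries on the proxy service above, which bypass the mesh):

ports:
  - target: 3000      # the container's own port
    published: 8087   # reachable on this port on every swarm node via the routing mesh
    protocol: tcp
    mode: ingress     # the default when mode is omitted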
Finally, I want to share one last compose example. This shows off an additional feature, one that is unique to my fork as far as I know: the ability for a single service to host two separate ports, each with an entirely separate certificate. Before, this was only possible if you made them separate services.
version: "3.6"
services:
  app:
    image: gitlab/gitlab-ee:latest
    ports:
      - "22:22"
      - "8083:80"
      - "4443:4443"
      - "8080:8888"
      - "8091:8091"
    dns:
      - 8.8.8.8
    volumes:
      - main-data:/var/opt/gitlab
      - main-log:/var/log/gitlab
      - main-conf:/etc/gitlab
      - pages-html:/var/www
      - pages-letsencrypt:/etc/letsencrypt
    environment:
      GITLAB_OMNIBUS_CONFIG: "from_file('/omnibus_config_main.rb')"
    configs:
      - source: omnibus-main
        target: /omnibus_config_main.rb
    secrets:
      - db_password
      - smtp_password
    healthcheck:
      disable: false
      start_period: 600s
      interval: 120s
      timeout: 60s
      retries: 15
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - "node.labels.accepting==all"
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
        window: 600s
      labels:
        swarm-proxy: "true"
        swarm-proxy-upstream: "host.docker.internal"
        swarm-proxy-port: "8083"
        swarm-proxy-host: "git.qoto.org"
        swarm-proxy-email: "[email protected]"
        swarm-proxy-upstream_chat: "host.docker.internal"
        swarm-proxy-port_chat: "8091"
        swarm-proxy-host_chat: "chat.git.qoto.org"
        swarm-proxy-email_chat: "[email protected]"
      resources:
        limits:
          memory: 8G
        reservations:
          memory: 3G
  runner:
    image: gitlab/gitlab-runner:latest
    ports:
      - "8095:8095"
    volumes:
      - runner-conf:/etc/gitlab-runner
      - runner-home:/home/gitlab-runner
    dns:
      - 8.8.8.8
    healthcheck:
      test: ["CMD-SHELL", "curl -k https://localhost:8095 || exit 1"]
      disable: false
      interval: 120s
      timeout: 60s
      retries: 20
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - "node.labels.accepting==all"
      restart_policy:
        condition: any
        delay: 120s
        max_attempts: 5
        window: 600s
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 32M
volumes:
  main-conf:
  main-data:
  main-log:
  workhorse:
  pages-html:
  pages-conf:
  pages-letsencrypt:
  pages-data:
  pages-log:
  runner-conf:
  runner-home:
configs:
  omnibus-main:
    file: ./create_omnibus_main.rb
secrets:
  db_password:
    file: ./db_password
  smtp_password:
    file: ./smtp_password
The important part in the above is the labels section. Notice I use the default labels as in the earlier section, but then I repeat them (all but the enable label) with _chat appended to the end of each. By doing this, it is able to create the two-in-one effect I described, each with its own routing and certificate. In general, you accomplish this by appending an underscore and then whatever name you want as an ID; just make sure they all follow the same pattern and you are good to go.
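In other words, the convention looks like this (the hostnames, ports, and the _api suffix here are made up purely for illustration):

deploy:
  labels:
    swarm-proxy: "true"
    # first vhost (no suffix)
    swarm-proxy-upstream: "host.docker.internal"
    swarm-proxy-port: "8080"
    swarm-proxy-host: "example.com"
    swarm-proxy-email: "[email protected]"
    # second vhost: the same labels repeated with an _api suffix
    swarm-proxy-upstream_api: "host.docker.internal"
    swarm-proxy-port_api: "8081"
    swarm-proxy-host_api: "api.example.com"
    swarm-proxy-email_api: "[email protected]"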
@freemo I'm fine with the discussion happening here 👍
Thank you for the green light. I was quite worried about stepping on any toes or being seen as trying to pull the rug out from under you.