docker-flow-letsencrypt
Use certbot docker image
Hi,
Maybe it would be a good idea to use the certbot docker image instead of curl'ing the certbot-auto package during the build?
Thanks
Hey.
Last time I checked, the certbot docker image wasn't updated to the latest version, and I wasn't sure whether they upload a new image shortly after they release a new version. I also know of some issues on their GitHub project regarding docker, so I thought it would be easier to install certbot-auto directly instead of using their image (and if their image stops being updated or working, this project stops working as well).
What do you say?
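For what it's worth, a quick way to check how current the published image is would be to pull it and ask it for its version (assuming the image's entrypoint is certbot, which it was last time I looked):

# pull the published image and print the certbot version it ships with
docker pull certbot/certbot:latest
docker run --rm certbot/certbot:latest --version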
Hi,
At first glance I think it should be easier to update via a docker compose update policy, and Docker Hub can usually hook into GitHub and trigger an automated build, so it should stay in sync (if they set the hook up right).
Anyway, I'm still not sure which one is the better idea. I can think of 3 approaches here:
- Use an Ubuntu base image (which is huge?) and curl the certbot-auto package (like you did)
- A Dockerfile with FROM certbot/certbot and overlay it
- A Dockerfile with FROM alpine and a compose file like the following...
version: '3'
services:
  nginx-certbot:
    build: .
    container_name: nginx-certbot
    env_file: .env
    image: rabbotio/nginx-certbot
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - etc-nginx-conf.d:/etc/nginx/conf.d
      - etc-ssl:/etc/ssl
      - var-www:/var/www
      - etc-letsencrypt:/etc/letsencrypt
      - var-log-letsencrypt:/var/log/letsencrypt
    depends_on:
      - nginx
      - certbot
    links:
      - nginx
      - certbot
    networks:
      - back
    deploy:
      mode: global # exactly one container per swarm node
  nginx:
    image: nginx:alpine
    container_name: nginx
    env_file: .env
    restart: on-failure
    networks:
      - back
    volumes:
      - etc-nginx-conf.d:/etc/nginx/conf.d
      - etc-ssl:/etc/ssl
      - var-www:/var/www
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"
    ports:
      - "80:80"
      - "443:443"
  certbot:
    image: certbot/certbot
    container_name: certbot
    env_file: .env
    environment:
      - DOMAIN=$DOMAIN
      - CERTBOT_EMAIL=$CERTBOT_EMAIL
      - ACME_WWWROOT=${ACME_WWWROOT:-/usr/share/nginx/html}
    networks:
      - back
    volumes:
      - etc-letsencrypt:/etc/letsencrypt
      - var-log-letsencrypt:/var/log/letsencrypt

volumes:
  etc-nginx-conf.d:
  etc-ssl:
  var-www:
  etc-letsencrypt:
  var-log-letsencrypt:

networks:
  back:
(Just a rough idea, not tested yet, and it would need a data volume container)
But it seems like mounting docker.sock is a bad idea, so I think I'll give up on that and use your approach instead.
Do you have an example of an nginx container triggering a reload after renewal? (Yes, I'll use df)
Thanks!
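Something along these lines is what I'm after, assuming the nginx container from the compose file above (container_name nginx) and the docker CLI available wherever the renewal hook runs (untested):

# ask the nginx master process to reload its configuration and certificates
docker exec nginx nginx -s reload

# or, equivalently, send the container a HUP signal
docker kill --signal=HUP nginx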
I thought about using docker.sock in the next release (see the testing branch) for restarting DFP, because I will add a feature that stores the certificates via docker secrets (and that's only possible via docker.sock). Unfortunately, when a secret is removed from or added to a running service, the service is restarted automatically.
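To illustrate, rotating a certificate that is delivered as a secret means swapping secrets on the service, and that redeploys its tasks. A rough sketch with placeholder names (cert-mydomain-v1/v2, proxy):

# create a new secret from the renewed certificate
docker secret create cert-mydomain-v2 /etc/letsencrypt/live/mydomain/fullchain.pem

# swap the secret on the service (this restarts the service's tasks)
docker service update \
  --secret-rm cert-mydomain-v1 \
  --secret-add cert-mydomain-v2 \
  proxy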
With docker.sock mounted into a running container you could reload nginx inside it with
docker service scale SERVICE_NAME=0
docker service scale SERVICE_NAME=1
after DFL has created the certificates.
I will think about using the certbot/certbot image.
Really @hamburml? That is interesting, as I was trying to figure out a way to reload nginx (1 replica only) when deployed in swarm mode, without any downtime.
docker service update nginx wouldn't help, as it would stop and start nginx, which runs with only 1 replica. What I was thinking of was scaling it to 2 replicas before applying a graceful update with the --update-parallelism flag set to 1, and then scaling it back to 1. From what I understood from your post, it looks like I could ditch all of this in favour of that recipe of yours - is that correct?
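Roughly, the sequence I have in mind would be something like this (service name and flags are assumptions on my part, untested):

# temporarily run a second replica so one task stays up during the update
docker service scale nginx=2

# roll the update through one task at a time
docker service update --update-parallelism 1 --force nginx

# scale back down once both tasks run the new configuration
docker service scale nginx=1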
Many thanks
@PedroMD Sorry, I wasn't clear enough in my last post. I simply described what I hoped I could do (let a running service decide that this service/container should be reloaded). Unfortunately this can't be done with plain docker scale commands without downtime, because scaling from 1 to 0 and back to 1 will always cause a short downtime.
I like your idea of scaling it to 2 and then using a graceful update process. But I think the same problem still exists: when a secret is added to a service, all running instances of the service restart (because secrets are mounted as read-only volumes, if I am correct). If you aren't using secrets and your certificates are mounted inside the DFP service, your way could work. But only when the older instance of your service is the one that is removed after the process.
possible sequence:
- cert stored in /certs
- DFP instance #1 is started, uses the cert stored in /certs
- update the cert in the /certs folder
- scale DFP to 2
- DFP instance #2 is started, uses the cert stored in /certs (loads the new certs; #1 still uses the old certs)
- close DFP #1
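As a rough shell sketch of that sequence (assuming the certs live on a volume mounted at /certs and the DFP service is named proxy; note that swarm decides which task is removed when scaling back down):

# the renewed certificate is already in the /certs volume

# start a second DFP task; it reads the new certificate on startup
docker service scale proxy=2

# scale back to one task; swarm picks which task to stop,
# so this only works if the old task is the one that goes away
docker service scale proxy=1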
@hamburml is right. A service cannot get a new secret without being restarted. Unless there is an undocumented workaround, that is the limitation we need to live with. Actually, it's not a limitation but a conscious design decision...
I would not have a problem if DFP is restarted occasionally. However, in the case of Let's Encrypt, new certs need to be updated every few weeks. Even that is not a problem if one has only one domain. However, people tend to use multiple domains assigned to different services. That means the DFP service would have to be updated quite often, and I don't think that's a good idea.
Long story short, I'd recommend using certs through secrets when they have a longer lifespan (NOT LE). For LE, sending a request to update certs is a better option. Besides, those certs are already transmitted through the network (from the LE server to DFLE), so there's not much point making them a secret only half-way through.
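For example, with docker-flow-proxy the request could look roughly like this (treat the endpoint path, port, and parameters as assumptions from memory and check the current DFP docs):

# push the renewed certificate to docker-flow-proxy over its HTTP API
# (mydomain.pem is a placeholder for the combined cert + key pem)
curl -i -XPUT \
  --data-binary @/certs/mydomain.pem \
  "http://proxy:8080/v1/docker-flow-proxy/cert?certName=mydomain.pem&distribute=true"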
Hello folks,
FYI, I also implemented docker-flow-proxy-letsencrypt, which works with the certbot/certbot base image. Both methods, secrets and volumes, are implemented to forward certs to docker-flow-proxy. As @vfarcic said, you need to reload DFP when updating certs with secrets.