Caddy config not reloaded
Most of the time, the config is not reloaded automatically; I have to restart the caddy-docker-proxy container to pick up the changes.
For example, I added basicauth to an already-configured domain via docker-compose labels, and the change did not take effect until I restarted the proxy container.
Where should I look to find out what is causing this?
You need to run docker compose up -d to have Docker recreate the containers whose labels changed. Labels are set at container creation time; they don't change just because you saved your compose.yml.
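A quick way to confirm whether a label change actually made it onto the running container is to recreate and then inspect it (the container name myapp here is just a placeholder, not from your setup):

```shell
# Recreate any containers whose configuration (including labels) changed
docker compose up -d

# Print the labels of the running container; the new label should appear here.
# If it doesn't, the container was not actually recreated.
docker inspect --format '{{ json .Config.Labels }}' myapp
```

If the label shows up in the inspect output but the proxy still serves the old config, the problem is on the caddy-docker-proxy side rather than in Compose.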
If that's not the problem, then please share more about your setup and the steps you take. We can't help without more information; otherwise we're forced to make assumptions.
Sorry, yes, I was a bit vague about my setup. I actually do recreate the containers after changing the labels. I just can't see any log entry or other sign that the proxy even notices the change in the labels.
Caddy version: v2.7.6
My Dockerfile:
ARG CADDY_VERSION=latest

FROM caddy:builder AS builder
RUN xcaddy build \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
    --with github.com/caddy-dns/cloudflare \
    --with github.com/greenpau/caddy-security \
    --with github.com/WeidiDeng/caddy-cloudflare-ip

FROM caddy:${CADDY_VERSION}
COPY --from=builder /usr/bin/caddy /usr/bin/caddy

CMD ["caddy", "docker-proxy"]
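For reference, building and starting this image might look like the following (the image tag my-caddy is just a placeholder; if the build arg is omitted, CADDY_VERSION defaults to latest as declared in the Dockerfile):

```shell
# Pin the runtime stage to the Caddy version reported above
docker build --build-arg CADDY_VERSION=2.7.6 -t my-caddy .

# Run the proxy; it needs the Docker socket to watch container labels
docker run -d --name caddy-proxy \
    -p 80:80 -p 443:443 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    my-caddy
```

Note that mounting the Docker socket is what lets caddy-docker-proxy see label changes at all.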
An example docker-compose.yml:
services:
  frontail:
    container_name: frontail
    build: .
    command: /log/logger/logfile --ui-highlight --ui-highlight-preset /preset/basic.json --theme basic -l 5000 -n 500 --disable-usage-stats
    volumes:
      - ./preset:/preset
    restart: always
    networks:
      - default
      - caddy
    labels:
      caddy: test.example.com
      caddy.reverse_proxy: "{{ upstreams 9001 }}"
      caddy.basicauth.user: ${AUTH_KEY}
    env_file:
      - ./.env
    environment:
      TZ: Europe/Budapest

networks:
  caddy:
    name: caddy
    external: true
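Since the caddy network is declared external in this compose file, Compose expects it to exist before the stack starts; a minimal sketch of setting it up (network name taken from the compose file above):

```shell
# Create the shared network once, up front; external networks are never
# created by "docker compose up" itself
docker network create caddy

# Then bring the stack up so containers attach to it
docker compose up -d
```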
Here is my redacted caddy-config, which is generated upon container restart:
{
    order authenticate before respond
    order authorize before reverse_proxy
    acme_dns cloudflare REDACTED_API_KEY
    email [email protected]
    log access {
        format json
        include http.log.access http.handlers.reverse_proxy
        level DEBUG
        output file /var/log/caddy/access.log {
            roll_keep 5
            roll_keep_for 2160h
            roll_size 1gb
        }
    }
    log default {
        exclude http.log.access
        format json
        level INFO
        output file /var/log/caddy/runtime.log {
            roll_keep 3
            roll_keep_for 720h
            roll_size 500mb
        }
    }
    security {
        authentication portal example_sso {
            crypto default token lifetime 3600
            enable identity provider generic
            cookie domain example.hu
            ui {
                links {
                    Profile /auth/profile icon "las la-user-cog"
                    "Who Am I" /auth/whoami icon "las la-user"
                }
                meta author "Example"
                meta description "Authentication Portal"
                meta title "Web Authentication Portal"
                static_asset assets/images/example.png images/png /assets/img/example.png
                static_asset assets/images/example_logo.png images/png /assets/img/example_logo.png
                template login /assets/templates/example/login.template
                template portal /assets/templates/example/portal.template
                template whoami /assets/templates/example/whoami.template
            }
            transform user {
                action add role authp/admin
                match realm generic
            }
        }
        authorization policy logger_policy {
            set auth url https://auth.example.com/auth
            allow roles authp/admin
        }
        oauth identity provider generic {
            realm generic
            driver generic
            client_id caddy_REDACTED
            client_secret REDACTED_35cf5ab1e1c823d44d86ade590562fd4
            scopes openid email profile
            base_auth_url https://sso.example.com
            metadata_url https://sso.example.com/.well-known/openid-configuration
        }
        authentication portal example_sso_cloud {
            crypto default token lifetime 3600
            enable identity provider generic
            cookie domain example.cloud
            ui {
                links {
                    Profile /auth/profile icon "las la-user-cog"
                    "Who Am I" /auth/whoami icon "las la-user"
                }
                meta author "example"
                meta description "Authentication Portal"
                meta title "Web Authentication Portal"
                static_asset assets/images/example.png images/png /assets/img/example.png
                static_asset assets/images/example_logo.png images/png /assets/img/example_logo.png
                template login /assets/templates/example/login.template
                template portal /assets/templates/example/portal.template
                template whoami /assets/templates/example/whoami.template
            }
            transform user {
                action add role authp/admin
                match realm generic
            }
        }
    }
    servers {
        trusted_proxies cloudflare {
            interval 12h
            timeout 15s
        }
    }
}

:2019 {
    metrics
}

auth.example.cloud {
    route /auth* {
        authenticate with example_sso_cloud
    }
    route /* {
        redir https://auth.example.cloud/auth/ 302
    }
}

auth.example.hu {
    route /auth* {
        authenticate with example_sso
    }
    route /* {
        redir https://auth.example.hu/auth/ 302
    }
}

automate.example.cloud {
    reverse_proxy 192.168.5.10:8080
}

log.example.cloud {
    authorize with logger_policy
    reverse_proxy 192.168.5.2:9001
}

dev.phpadmin.example.cloud {
    reverse_proxy 192.168.5.8:80
}

docs.example.cloud {
    reverse_proxy 192.168.5.13:3000
}

drone.example.hu {
    reverse_proxy 192.168.5.9:80
}

git.example.cloud {
    reverse_proxy 192.168.5.22:80
}

graphs.example.cloud {
    reverse_proxy 192.168.5.21:3000
}

influx.example.cloud {
    reverse_proxy 192.168.5.18:8086
}

log.example.cloud {
    authorize with logger_policy
    reverse_proxy 192.168.5.5:9001
}

example.app, www.example.app {
    redir * https://example.hu/
}

example.cloud, erp.example.cloud {
    reverse_proxy 192.168.5.12:80 {
        header_down Strict-Transport-Security max-age=15552000;
    }
}
@francislavoie Do you have any idea why it is not working? I have tried rebuilding, removing parts of the config, etc., but still no luck.
Like I said, you need to recreate the containers for label changes to take effect. You can't just edit your docker-compose.yml and expect the changes to be picked up.
And as I said before, I recreate the containers all the time. Here is the workflow that usually produces the error:
- Container is updated, so I pull the new image from the repo: docker compose pull
- I recreate the container to update it to the new version: docker compose up -d
- The container's IP address changes in the caddy network, but caddy-docker-proxy still has the old IP address; the config is not updated.
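One way to see the mismatch described above is to compare the container's current IP on the caddy network with the upstream Caddy is actually using. The container name frontail is taken from the compose file earlier in the thread, <proxy-container> is a placeholder for your proxy container's name, and this assumes Caddy's admin API is still on its default localhost:2019 inside the proxy container:

```shell
# Current IP of the app container on the caddy network
docker inspect -f '{{ .NetworkSettings.Networks.caddy.IPAddress }}' frontail

# Dump the config Caddy is currently running (via the admin API) and
# compare the reverse_proxy upstream address against the IP above
docker exec <proxy-container> wget -qO- http://localhost:2019/config/
```

If the upstream in the running config still points at the old IP after a recreate, that confirms the proxy missed the Docker event rather than Compose failing to recreate the container.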