caddy-docker-proxy
layer4 configuration block
Following #342, the layer4 plugin can be configured via the Caddyfile.
I'm trying to get one of the examples working, but I'm not sure how to set the label keys correctly.
Example Caddyfile:
{
    layer4 {
        127.0.0.1:5000 {
            route {
                tls
                echo
            }
        }
    }
}
I have tried this compose file (ignore the use of whoami; it's just an example):
services:
  whoami:
    image: traefik/whoami
    networks:
      - caddy
    labels:
      caddy.layer4."127.0.0.0.1:5000".route.tls:
      caddy.layer4."127.0.0.0.1:5000".route.echo:

networks:
  caddy:
    external: true
But I get this generated Caddyfile, where the key has been split on every dot, which (correctly) causes an error:
{
    layer4 {
        `\"127` {
            0 {
                0 {
                    0 {
                        `1:5000\"` {
                            route {
                                echo
                                tls
                            }
                        }
                    }
                }
            }
        }
    }
}
I've tried escaping the dots with a backslash, and that also doesn't work.
This simple example could be done in a base Caddyfile, but I would like to be able to use layer4 for services defined via labels.
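For reference, the base-Caddyfile fallback mentioned above can be wired up by pointing CDP at a static Caddyfile via the CADDY_DOCKER_CADDYFILE_PATH environment variable (the same variable used in a later comment in this thread); CDP merges that file with the label-generated config. A minimal sketch, with an illustrative service name and mount path:

services:
  caddy:
    environment:
      # CDP merges this base file with the label-generated config:
      CADDY_DOCKER_CADDYFILE_PATH: /etc/caddy/Caddyfile
    volumes:
      # Base Caddyfile containing the layer4 global block from the example above:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro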
I'm also running into this. Watching to see if this issue gets answered.
Planning on trying the solution here to see if it's a viable workaround.
Success! I'm attempting to do DNS-over-TLS with pihole, and this is what my labels look like after setting DOT_INGRESS_ADDR to 0.0.0.0:853 on my main caddy container:
"caddy_1.layer4.0_{$DOT_INGRESS_ADDR}": ""
"caddy_1.layer4.0_{$DOT_INGRESS_ADDR}.@pihole_host": "tls sni pihole.my.domain"
"caddy_1.layer4.0_{$DOT_INGRESS_ADDR}.route": "@pihole_host"
"caddy_1.layer4.0_{$DOT_INGRESS_ADDR}.route.0_tls": ""
"caddy_1.layer4.0_{$DOT_INGRESS_ADDR}.route.1_proxy": "{{ upstreams 53 }}"
With this approach, I can successfully run doggo google.com A @tls://pihole.my.domain and get an answer.
@coandco Looks a little complex to me compared to just using the Caddy JSON config, which has more capabilities as well.
I mean, sure, but if you're using caddy-docker-proxy you don't really have the option of using the JSON config.
@coandco Indeed, which is sad, to be honest.
@coandco If you want caddy to listen on all available interfaces (0.0.0.0) you can do that without variables:
labels:
  "caddy_1.layer4.:22.route.proxy": "{{upstreams 22}}"
For anyone subscribed here, I commented in another Caddy L4 issue with an example of how to use the import directive with file snippets as an alternative to more verbose config labels.
It is a useful technique when you need more dynamic config, like the example I answered there, which involved proxying to multiple SSH hosts based on a matcher.
For reference, this is the more verbose example, which would have been messy with multiple labels like the pihole example a few comments up:
services:
  # CDP:
  reverse-proxy:
    working_dir: /srv
    configs:
      - source: caddy-l4-proxy-ssh
        target: /srv/snippets/l4-proxy-ssh

  gitea:
    labels:
      'caddy.layer4.:22.import': "snippets/l4-proxy-ssh gitea gitea.example.internal gitea:22"

# I'm using the Docker Compose `configs` feature here to embed the snippet file in this `compose.yaml`.
# You could alternatively use separate files and bind mount via `volumes` if you prefer.
configs:
  caddy-l4-proxy-ssh:
    content: |
      @ssh-host-{args[0]} tls sni {args[1]}
      route @ssh-host-{args[0]} {
        tls
        subroute {
          @is-ssh ssh
          route @is-ssh {
            proxy {args[2]}
          }
        }
      }
And the reproduction variant that simplified the args down to 1:
services:
  reverse-proxy:
    working_dir: /srv
    configs:
      - source: caddy-l4-proxy-ssh
        target: /srv/snippets/l4-proxy-ssh

  gitea:
    labels:
      'caddy.layer4.:22.import': "snippets/l4-proxy-ssh gitea"

configs:
  caddy-l4-proxy-ssh:
    content: |
      @ssh-host-{args[0]} tls sni {args[0]}.example.internal
      route @ssh-host-{args[0]} {
        tls
        proxy {args[0]}:22
      }
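To make the import mechanics concrete: with the single gitea argument, that snippet should expand to roughly the following (my own sketch of the substitution, not captured CDP output):

@ssh-host-gitea tls sni gitea.example.internal
route @ssh-host-gitea {
    tls
    proxy gitea:22
}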
What does your caddy file output look like with this configuration?
cat /config/caddy/Caddyfile.autosave:
{
    layer4 {
        0.0.0.0:853 {
            @pihole_host tls sni pihole.my.domain
            route @pihole_host {
                tls
                proxy pihole-ip-here:53
            }
        }
    }
}
Reproduction
It's not Pi-hole, but here is a simple example:
# Start Caddy and Blocky DNS:
$ docker compose -f compose.yaml -f local.compose.yaml up -d --force-recreate
# Query DNS via DoT through Caddy:
$ docker compose -f compose.yaml -f local.compose.yaml run --rm -it dns-client dns.example.internal A @tls://dns.example.internal
NAME                   TYPE  CLASS  TTL    ADDRESS       NAMESERVER
dns.example.internal.  A     IN     3600s  192.168.0.42  dns.example.internal:853
# Again using default DNS (127.0.0.11:53 in the container) without DoT:
$ docker compose -f compose.yaml -f local.compose.yaml run --rm -it dns-client dns.example.internal A
NAME                   TYPE  CLASS  TTL   ADDRESS     NAMESERVER
dns.example.internal.  A     IN     600s  172.21.0.4  127.0.0.11:53
# Caddyfile generated by CDP:
$ docker compose -f compose.yaml -f local.compose.yaml exec -it reverse-proxy cat /config/caddy/Caddyfile.autosave
{
    local_certs
    auto_https prefer_wildcard

    layer4 {
        :853 {
            @dns_host tls sni dns.example.internal
            route @dns_host {
                tls
                proxy dns-blocky:53
            }
        }
    }
}

*.example.internal {
    abort
}
compose.yaml:
services:
  reverse-proxy:
    container_name: cdp
    image: localhost/caddy-docker-proxy:2.9.2
    # Build a custom image of Caddy with CDP + L4 modules:
    pull_policy: build
    build:
      # NOTE: `$$` is used to escape `$` as an opt-out of the Docker Compose ENV interpolation feature.
      dockerfile_inline: |
        ARG CADDY_VERSION=2.9.1
        FROM caddy:$${CADDY_VERSION}-builder AS builder
        RUN xcaddy build \
          --with github.com/lucaslorentz/caddy-docker-proxy/[email protected] \
          --with github.com/mholt/caddy-l4
        FROM caddy:$${CADDY_VERSION}-alpine
        COPY --link --from=builder /usr/bin/caddy /usr/bin/caddy
        CMD ["caddy", "docker-proxy"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  dns:
    image: ghcr.io/0xerr0r/blocky
    container_name: dns-blocky
    configs:
      - source: blocky-config
        target: /app/config.yml
    labels:
      "caddy.layer4.:853": ""
      "caddy.layer4.:853.@dns_host": "tls sni dns.example.internal"
      "caddy.layer4.:853.route": "@dns_host"
      "caddy.layer4.:853.route.0_tls": ""
      "caddy.layer4.:853.route.1_proxy": "dns-blocky:53"

  dns-client:
    scale: 0 # Prevent this container starting with `docker compose up`
    image: ghcr.io/mr-karan/doggo:latest

configs:
  # Very basic config example:
  blocky-config:
    content: |
      # Any query not handled by `customDNS` is queried through the upstreams:
      upstreams:
        groups:
          default:
            - 1.1.1.1 # Cloudflare
      customDNS:
        # All queries to `example.internal` (and any child DNS labels belonging to it) will respond with this IP:
        mapping:
          example.internal: 192.168.0.42
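As an aside, Blocky itself can be sanity-checked over plain DNS before involving the Caddy listener. This command is a sketch that assumes doggo's @udp:// nameserver syntax and that dns-blocky resolves on the compose network:

# Query Blocky directly, bypassing Caddy:
$ docker compose -f compose.yaml -f local.compose.yaml run --rm -it dns-client dns.example.internal A @udp://dns-blocky:53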
This second Compose file is just for reproducing locally offline: a locally provisioned wildcard cert trusted by the dns-client container, plus a network alias to ensure the connection goes through Caddy.
local.compose.yaml:
services:
  reverse-proxy:
    # For local testing, points the blocky DNS name to this Caddy container:
    networks:
      default:
        aliases:
          - dns.example.internal
    # Optional: Provision a wildcard cert for convenience.
    # NOTE: If not testing with locally signed certs, this will require an ACME DNS provider configured.
    environment:
      CADDY_DOCKER_CADDYFILE_PATH: /etc/caddy/Caddyfile
    configs:
      - source: caddy-config
        target: /etc/caddy/Caddyfile
    volumes:
      - ./data/caddy/:/data/caddy/:rw

  # For DoT DNS queries, mount the Caddy private CA cert into the container to be trusted:
  dns-client:
    volumes:
      - ./data/caddy/pki/authorities/local/root.crt:/etc/ssl/certs/ca-certificates.crt:ro

configs:
  caddy-config:
    content: |
      # Global Settings:
      {
        local_certs
        auto_https prefer_wildcard
      }

      # Fallback: if a subdomain was not proxied, abort the connection.
      # (Also provisions the wildcard cert for all subdomains to use)
      *.example.internal {
        abort
      }
So as you can see from the above, instead of all those extra labels, you could take the snippet approach I mentioned in my last comment:
services:
  reverse-proxy:
    working_dir: /srv
    configs:
      - source: caddy-l4-proxy-host
        target: /srv/snippets/l4-proxy-host

  # Use just a single label via snippet import:
  dns:
    labels:
      'caddy.layer4.:853.import': "snippets/l4-proxy-host dns 53"

configs:
  # Re-usable snippet for a common TLS SNI matcher routing to a container port:
  caddy-l4-proxy-host:
    content: |
      @host-{args[0]} tls sni {args[0]}.example.internal
      route @host-{args[0]} {
        tls
        proxy {args[0]}:{args[1]}
      }
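With the dns 53 arguments, that import should expand to essentially the same :853 block CDP generated from the individual labels earlier; only the matcher name and the upstream (the dns service name vs the dns-blocky container name, both resolvable on the network) differ:

@host-dns tls sni dns.example.internal
route @host-dns {
    tls
    proxy dns:53
}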