caddy-docker-proxy
How do I create Docker labels for caddy-l4?
❯ cat Dockerfile
ARG CADDY_VERSION=2.9.1
FROM caddy:${CADDY_VERSION}-builder AS builder
RUN xcaddy build \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
    --with github.com/mholt/caddy-l4=github.com/vnxme/caddy-l4@caddyfile
FROM caddy:${CADDY_VERSION}-alpine
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
CMD ["caddy", "docker-proxy"]
I created an image by writing a Dockerfile as above.
What I want is for SSH connection requests to ssh.test.com to be forwarded to port 22 of a specific container.
Currently, caddy-docker-proxy is configured as follows:
services:
  caddy:
    container_name: caddy
    image: my/caddy-docker-proxy-l4
    restart: unless-stopped
    environment:
      - CADDY_INGRESS_NETWORKS=caddy
    networks:
      - caddy
    ports:
      - 22:22
      - 80:80
      - 443:443
...
caddy_16.layer4.:22.@git: ssh
caddy_16.layer4.:22.route: "@git"
caddy_16.layer4.:22.route.proxy.upstream: gitea:22
I have successfully configured my caddy-docker-proxy container using labels as described above.
However, I am wondering if the following setup is possible:
For example:
- When connecting to gitea.domain.com, it should proxy to gitea:22.
- When connecting to test.domain.com, it should proxy to test:22.
Is it possible to route SSH traffic this way using caddy-docker-proxy? I would appreciate any advice.
> Is it possible to route SSH traffic this way using caddy-docker-proxy?
SSH doesn't have SNI like TLS does, so you would need to wrap the connection in TLS to carry that information (e.g. with `openssl s_client ...` via the SSH `ProxyCommand` option), or route by some other information with L4 matchers instead.
So for now you're effectively using Caddy to blindly tunnel L4 traffic from one port to another IP/port, kinda like what SSH tunneling supports. SSH proxying AFAIK tends to be done via a jump server / bastion host.
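If routing by client network would be enough for your use case, caddy-l4 also provides a `remote_ip` matcher you could route by instead of SNI. A minimal sketch in the same label style as your working config above (the `@office` matcher name and CIDR are hypothetical placeholders, and I haven't verified the caddyfile fork's support for this particular matcher):

caddy_16.layer4.:22.@office: remote_ip 203.0.113.0/24
caddy_16.layer4.:22.route: "@office"
caddy_16.layer4.:22.route.proxy.upstream: gitea:22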
Here's an overview of some options I came across:
- It seems Caddy L4 + the `ssh` module aka Kadeessh (there's a JSON config example, but no `Caddyfile` support for Kadeessh) might be able to implement a proxy / bastion host?
- A user got SSH proxying working via `sshpiper`.
  - Rather than route the connection, `sshpiper` bridges two separate SSH connections (one from the client, one to the target destination), similar to a MitM attack: your SSH client connects to `sshpiper` using pubkey auth, and `sshpiper` then acts as an SSH client of its own to the target SSH server, so it must have its `known_hosts` configured for verifying trust and the relevant private key to authenticate to the target server.
  - A Docker image is available + a quick start guide.
  - 2022 discussion specific to Gitea pubkey auth.
  - The `yaml` plugin supports SSH certificates (the plugin README config example wasn't updated, but the relevant PR shows an example in the added test coverage). SSH certificate auth is only implemented for the kubernetes and yaml plugins.
  - It also seems that the `yaml` plugin may be necessary if you're using a modern pubkey with Ed25519 instead of RSA.
- A Caddy forum discussion on the topic references `proxytunnel` for doing SSH through HTTPS, which could use Caddy L4 to match on `tls sni` and then terminate that to SSH traffic sent to the upstream. The goal with HTTPS here is to bypass network traffic inspection that detects SSH (there's a detailed blog article on the subject).
- `warpgate` and `sshportal` seem to be services you could run instead of Caddy to proxy SSH.
I looked into this a bit with the standard OpenSSH client/server and Caddy L4. It's a bit verbose, but hopefully it covers what you were seeking 😎
Basic L4 routing (only 1 service)
The `@git ssh` matcher shouldn't be necessary unless you need to distinguish the traffic from something else on the same port, so you should be able to route anything inbound on Caddy's port 22 through to a single container with just:
services:
  ssh-server:
    labels:
      'caddy.layer4.:22.route.proxy': "{{ upstreams 22 }}"
`{{ upstreams 22 }}` will become the container IP + port 22, but you can also rely on Docker's internal DNS (127.0.0.11:53) and reference the container by whatever hostnames resolve to it, like you did with gitea:22 👍
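For reference, CDP should render that label into a `layer4` block within the Caddyfile global options, roughly like this (a sketch; `172.18.0.2` stands in for whatever IP the container actually gets):

{
	layer4 {
		:22 {
			route {
				proxy 172.18.0.2:22
			}
		}
	}
}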
Route SSH to more than 1 container
You will need to decide how you want to distinguish the traffic. One way is by wrapping the SSH connection with TLS.
That'll require:
- A `route` selected by matching with `tls sni`
- TLS termination with the `tls` handler
- Optionally, matching the connection again. This will require another matcher + the `subroute` handler to nest another `route` (at least I think this is how it's done, the L4 docs are a bit sparse).
- The `proxy` handler to finally route the connection to the appropriate container (sketched below).
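Put together, the generated layer4 config should look roughly like this; a sketch assuming a `gitea` upstream and the `gitea.example.internal` SNI used in the later examples:

{
	layer4 {
		:22 {
			@ssh-gitea tls sni gitea.example.internal
			route @ssh-gitea {
				tls
				subroute {
					@is-ssh ssh
					route @is-ssh {
						proxy gitea:22
					}
				}
			}
		}
	}
}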
Using the `import` directive to simplify common config via labels
When it comes to multi-line config, such as an L4 config block like this, I find it a bit easier to use a single CDP label with an `import` directive referencing a Caddyfile snippet.
NOTE: Snippets will be resolved relative to the container's WORKDIR, so either use an absolute path or make sure your snippets are referenced relative to that location (it varies between CDP and Caddy images, so you may want to explicitly set `working_dir:` in `compose.yaml`)
services:
  # CDP:
  reverse-proxy:
    working_dir: /srv
    configs:
      - source: caddy-l4-proxy-ssh
        target: /srv/snippets/l4-proxy-ssh
  gitea:
    labels:
      'caddy.layer4.:22.import': "snippets/l4-proxy-ssh gitea gitea.example.internal gitea:22"

# I'm using the Docker Compose `configs` feature here to embed the snippet file in this `compose.yaml`.
# You could alternatively use separate files and bind mount via `volumes` if you prefer.
configs:
  caddy-l4-proxy-ssh:
    content: |
      @ssh-host-{args[0]} tls sni {args[1]}
      route @ssh-host-{args[0]} {
        tls
        subroute {
          @is-ssh ssh
          route @is-ssh {
            proxy {args[2]}
          }
        }
      }
We have three separate args for that snippet:
- A value to make a unique matcher name for the `tls sni` route condition.
- The SNI value to match against.
- The upstream to route the traffic to (hostname / IP + port).
If your SSH services all listen on port 22 and follow a common convention between their hostname and SNI like in the above example, you could simplify this snippet down to a single arg:
@ssh-host-{args[0]} tls sni {args[0]}.example.internal
route @ssh-host-{args[0]} {
  tls
  subroute {
    @is-ssh ssh
    route @is-ssh {
      proxy {args[0]}:22
    }
  }
}
services:
  ssh-server:
    labels:
      'caddy.layer4.:22.import': "snippets/l4-proxy-ssh gitea"
Snippet compatibility within Global settings
Now if you like that approach, keep in mind that you need to keep the port in the label.
- `caddy.layer4.import` will not work properly when there's more than one block mapping for `0.0.0.0:22`; CDP does not merge these (it just adds the `import` directive as part of the final `Caddyfile`, which Caddy will resolve).
- You also need to rely on importing separate snippets from files as shown above, since even with a custom `Caddyfile` it doesn't seem to be in scope for global settings to reference otherwise.
# Global settings:
{
	# Won't work:
	import some-snippet

	# Will work:
	import path/to/snippet/file
}

(some-snippet) {
	# ...
}

# Will work:
import some-snippet
The SSH connection
To actually wrap the SSH connection with TLS, your SSH client can do this with `openssl s_client ...`. For example with OpenSSH (the standard SSH client on Linux) you could either use this:
ssh \
  -o ProxyCommand='openssl s_client -connect gitea.example.internal:22 -quiet' \
  -o StrictHostKeyChecking=accept-new \
  -i /root/.ssh/example \
  user@gitea
or add to your OpenSSH client config (~/.ssh/config):
Host gitea
  HostName = gitea.example.internal
  IdentityFile = /root/.ssh/example
  ProxyCommand = openssl s_client -connect gitea.example.internal:22 -quiet -verify_quiet
  StrictHostKeyChecking = accept-new
If unfamiliar with these settings:
- `HostName` isn't too relevant here, as `ProxyCommand` will instead have `openssl` make the connection to Caddy (where `gitea.example.internal` resolves to), after which Caddy terminates TLS and forwards the original SSH connection.
- `StrictHostKeyChecking` is only added here to automatically accept a new SSH host the first time you connect to it (a fairly safe convenience). If a subsequent connection notices a mismatch in the public key, it'll notify you as per usual; feel free to remove that line.
- `IdentityFile` / `-i` is for SSH key file authentication; it can be removed if you're using password auth instead.
Now you can just connect with ssh gitea instead 😎
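If the connection fails, you can test the TLS leg on its own by running the proxy command directly; when Caddy matches the SNI and terminates TLS successfully, the upstream's SSH banner should appear (a sketch, assuming the hostnames from this example):

openssl s_client -connect gitea.example.internal:22 -quiet -verify_quiet
# Expect the upstream's SSH greeting on stdout, something like:
# SSH-2.0-OpenSSH_9.9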
Other solutions (with or without Caddy) are usually going to require extra config somewhere, but perhaps one of the other options I shared earlier works better for you; this is just the one I explored while trying to learn Caddy L4.
To minimize verbosity in the config, if you had quite a few services, you could do something like this instead:
Host *.example.internal
  IdentityFile = /root/.ssh/example
  ProxyCommand = openssl s_client -connect %h:%p -quiet -verify_quiet
  StrictHostKeyChecking = accept-new
`HostName` will default to the matched `Host` value, and `ProxyCommand` references it via the tokens `%h` (HostName) + `%p` (Port).
This provides a generic config block so that you can connect to the two different services:
ssh gitea.example.internal
ssh test.example.internal
L4 tls hosts and providing a TLS certificate
I haven't looked into this too thoroughly, but assuming you need a certificate provisioned for each domain `openssl` would be connecting to, that may be a little awkward to express in a Caddyfile if you don't have a matching HTTPS service?
One workaround that you can use is a wildcard certificate as a fallback:
- Via container labels for CDP:

  services:
    reverse-proxy:
      # Global options first, then a separate site-address block:
      labels:
        # Prefer using a wildcard certificate when already available instead of provisioning a new cert:
        caddy.auto_https: prefer_wildcard
        # Provision a wildcard cert:
        caddy_1: "*.example.internal"
        # Double quoting required for preserving quotes (YAML string => Caddy string where inner quotes required)
        caddy_1.respond: '"Hello World"'

- Via base `Caddyfile` for CDP:

  services:
    reverse-proxy:
      environment:
        CADDY_DOCKER_CADDYFILE_PATH: /etc/caddy/Caddyfile
      configs:
        - source: caddy-config
          target: /etc/caddy/Caddyfile

  configs:
    caddy-config:
      content: |
        # Global Settings:
        {
          auto_https prefer_wildcard
        }
        # Provision a wildcard cert:
        *.example.internal {
          respond "Hello World"
        }
NOTE: Caddy 2.9 is required for prefer_wildcard support. CDP image tags/releases do not align with upstream Caddy releases:
- CDP 2.9.1 provides Caddy 2.8.4
- CDP 2.9.2 provides Caddy 2.9.1
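For a publicly resolvable domain, provisioning that wildcard cert requires the ACME DNS challenge. A minimal sketch, assuming you rebuild the image with a DNS plugin (github.com/caddy-dns/cloudflare here) and set the hypothetical CF_API_TOKEN env var on the CDP container:

services:
  reverse-proxy:
    labels:
      # Maps to the `acme_dns` global option:
      caddy.acme_dns: cloudflare {env.CF_API_TOKEN}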
Reproduction
This is a bit lengthy, so I've split it into separate Compose files with some commentary for extra context. It should be sufficient to grok what it's doing, try it out, and adapt it to your own setup 👍
The main compose.yaml uses include to import and combine the other *.compose.yaml snippets. If you create each file, this will run offline (after building/pulling any images). Here are the commands:
docker compose up -d --force-recreate
# Access either SSH server container:
docker compose run --rm ssh-client ssh gitea.example.internal
docker compose run --rm ssh-client ssh test.example.internal
# Cleanup (Remove related containers, networks, volumes):
docker compose down --volumes
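As a quick check that the SNI routing reached the right container, you can also run a one-off remote command instead of an interactive shell (the output reflects the `hostname:` values set in cdp-ssh.compose.yaml):

docker compose run --rm ssh-client ssh gitea.example.internal hostname
# Expected output: gitea
docker compose run --rm ssh-client ssh test.example.internal hostname
# Expected output: test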
compose.yaml:
name: example

include:
  # This is the equivalent of `-f` usage in:
  # docker compose -f a.compose.yaml -f b.compose.yaml ... <subcommand>
  - path:
      - ssh.compose.yaml
      - cdp.compose.yaml
      - cdp-ssh.compose.yaml
      # Optional (better support for testing offline without LetsEncrypt):
      - cdp-local.compose.yaml
      - net.compose.yaml
cdp-ssh.compose.yaml:
# Add the Caddy L4 snippet + container label and SSH client config,
# enabling Caddy to proxy SSH connections to different containers based on TLS SNI
services:
  reverse-proxy:
    # Optionally publish the SSH port if necessary:
    ports:
      #- "22:22"
      # Since Docker Engine 27.0.1 the host port can be omitted
      # when it would match the container port.
      # Compose V2 does not require quotes for ports (no longer mistaken as hexadecimal):
      - 22
    configs:
      - source: caddy-l4-proxy-ssh
        target: /srv/snippets/l4-proxy-ssh

  # Use the SSH client config to simplify the `ssh` command:
  ssh-client:
    command: ssh gitea.example.internal
    configs:
      - source: ssh-hosts
        target: /root/.ssh/config

  # `hostname` set for better clarity during testing;
  # when you SSH into either container the hostname will be used instead of the container ID:
  # (you can also directly connect via `ssh gitea` instead of routing through Caddy)
  ssh-server:
    hostname: gitea
    labels:
      'caddy.layer4.:22.import': "snippets/l4-proxy-ssh gitea"
  ssh-server-b:
    hostname: test
    labels:
      'caddy.layer4.:22.import': "snippets/l4-proxy-ssh test"

# Simplified SSH client + Caddy snippet,
# assumes all containers as subdomains to a common domain:
configs:
  ssh-hosts:
    content: |
      Host *.example.internal
        IdentityFile = /root/.ssh/example
        ProxyCommand = openssl s_client -connect %h:%p -quiet -verify_quiet
        StrictHostKeyChecking = accept-new
  caddy-l4-proxy-ssh:
    content: |
      @ssh-host-{args[0]} tls sni {args[0]}.example.internal
      route @ssh-host-{args[0]} {
        tls
        proxy {args[0]}:22
      }
ssh.compose.yaml:
# Basic SSH server and client setup for local reproduction.
# This would represent any SSH containers you want to proxy to.
services:
  # SSH from this container to the `ssh-server` container via the default `command`, run via:
  # docker compose run --rm -it ssh-client
  ssh-client:
    image: localhost/ssh-client
    command: ssh -o StrictHostKeyChecking=accept-new -i /root/.ssh/example root@ssh-server
    build:
      # NOTE: `openssl` is only added for supporting the Caddy L4 example.
      # Enables wrapping an SSH connection with TLS for SNI matching support.
      dockerfile_inline: |
        FROM alpine
        RUN apk add openssh-client openssl
    # This service is only manually started (excludes this service from `docker compose up`):
    scale: 0
    # SSH private key for authentication (instead of password input):
    configs:
      - source: ssh-key-private
        target: /root/.ssh/example
        # Expected permissions for the SSH private key (only the owner can read/write):
        mode: 0600

  ssh-server:
    image: localhost/ssh-server
    build:
      dockerfile_inline: |
        FROM alpine
        RUN <<HEREDOC
        apk add openssh-server
        # NOTE: Insecure config only for password login example:
        echo "root:password" | chpasswd
        echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
        # Create default `HostKey` for /etc/ssh/sshd_config config:
        # (Required for the `sshd` service to start)
        ssh-keygen -A
        HEREDOC
        CMD ["/usr/sbin/sshd", "-D"]
    # Add our SSH public key for an SSH client to authenticate against:
    configs:
      # NOTE: The target file contents is typically 1 or more public key files concatenated
      # (`/etc/ssl/certs/ca-certificates.crt` is similar in structure for TLS)
      - source: ssh-key-public
        target: /root/.ssh/authorized_keys
        mode: 0644

  # For the Caddy L4 proxy example, duplicate the `ssh-server` container config above:
  # (an extra SSH server to demonstrate routing via TLS SNI)
  ssh-server-b:
    extends: ssh-server

# The equivalent SSH keypair files can be generated with this command:
# ssh-keygen -t ed25519 -N '' -C root@example.internal -f ./example
# NOTE: When added to containers, the expected file permissions will need to be set via `configs.mode`.
configs:
  ssh-key-private:
    content: |
      -----BEGIN OPENSSH PRIVATE KEY-----
      b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
      QyNTUxOQAAACC+1MPvuajKGG9sK0uuJ/RqRMi3L6U2eYQUcb+tKD9q1AAAAJiZUfbimVH2
      4gAAAAtzc2gtZWQyNTUxOQAAACC+1MPvuajKGG9sK0uuJ/RqRMi3L6U2eYQUcb+tKD9q1A
      AAAEA4EOL0P/JgixgdU0kjxDyyePEeNW7dsOejWG15GlmHGL7Uw++5qMoYb2wrS64n9GpE
      yLcvpTZ5hBRxv60oP2rUAAAAFXJvb3RAZXhhbXBsZS5pbnRlcm5hbA==
      -----END OPENSSH PRIVATE KEY-----
  ssh-key-public:
    content: |
      ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL7Uw++5qMoYb2wrS64n9GpEyLcvpTZ5hBRxv60oP2rU root@example.internal
cdp.compose.yaml:
services:
  reverse-proxy:
    container_name: cdp
    image: localhost/caddy-docker-proxy:2.9.2
    # Build a custom image of Caddy with CDP + L4 modules:
    pull_policy: build
    build:
      # NOTE: `$$` is used to escape `$` as an opt-out of the Docker Compose ENV interpolation feature.
      dockerfile_inline: |
        ARG CADDY_VERSION=2.9.1
        FROM caddy:$${CADDY_VERSION}-builder AS builder
        # Re-declare the ARG to bring it into this build stage's scope:
        ARG CADDY_VERSION
        RUN xcaddy build $${CADDY_VERSION} \
          --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
          --with github.com/mholt/caddy-l4
        FROM caddy:$${CADDY_VERSION}-alpine
        COPY --link --from=builder /usr/bin/caddy /usr/bin/caddy
        CMD ["caddy", "docker-proxy"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp" # HTTP/3
    # Optional: Provision a wildcard cert for convenience.
    # If not testing with locally signed certs, this will require an ACME DNS provider configured.
    environment:
      CADDY_DOCKER_CADDYFILE_PATH: /etc/caddy/Caddyfile
    configs:
      - source: caddy-config
        target: /etc/caddy/Caddyfile

configs:
  caddy-config:
    content: |
      # Global Settings:
      {
        auto_https prefer_wildcard
      }

      *.example.internal {
        # Use `respond` instead of `abort` if you're troubleshooting connectivity:
        # respond "hello world"

        # Reject connection when subdomain has no configured container:
        abort
      }
cdp-local.compose.yaml:
# Optional: For testing purposes (otherwise defaults to a public CA like LetsEncrypt).
# Have Caddy provision leaf certs locally (self-signed) via a private CA cert that Caddy
# creates on first run, and share that with other containers via a named data volume
# so that those containers can verify the chain of trust successfully when connecting to Caddy.
volumes:
  caddy-ca:
    name: caddy-ca

services:
  reverse-proxy:
    # Enable the global option `local_certs` to default cert provisioning to Caddy as a private CA:
    labels:
      caddy.local_certs:
    volumes:
      - caddy-ca:/data/caddy/:rw

  # Using a named data volume, the top-level relative `subpath`
  # allows mounting only that file from the volume into this
  # container at the `target` path.
  # A bind mount volume could be used instead (simpler):
  # volumes:
  #   - ./data/caddy/pki/authorities/local/root.crt:/etc/ssl/certs/ca-certificates.crt:ro
  #
  # NOTE: Normally you'd instead have your certificate mounted at
  # `/usr/local/share/ca-certificates/caddy.crt` and update the
  # `ca-certificates.crt` file via the `update-ca-certificates` command.
  # Since this is the only CA cert needed for the reproduction to work,
  # I've simplified that by directly replacing this file in the container.
  ssh-client:
    volumes:
      - type: volume
        source: caddy-ca
        target: /etc/ssl/certs/ca-certificates.crt
        read_only: true
        volume:
          nocopy: true
          subpath: pki/authorities/local/root.crt
net.compose.yaml:
# Useful tips for networking with Compose
services:
  reverse-proxy:
    networks:
      public:
      # Optional (used for this self-contained example).
      # Containers on the `isolated-network` (default) will resolve
      # these DNS names to the associated reverse-proxy service network IP:
      # (also useful if containers should skip equivalent public DNS)
      default:
        aliases:
          - gitea.example.internal
          - test.example.internal

# Optional:
# - More secure private network (when proxied containers don't require external network access)
# - Restrict ports published to the Docker host (when desirable to avoid external hosts connecting)
networks:
  # Default containers to a network where containers can reach each other,
  # but have no access to the internet:
  default:
    name: isolated-network
    internal: true
    # The `isolated` gateway mode creates the internal network without a gateway.
    # - This denies the host from connecting to an internal network container IP
    #   (likewise for container to host via gateway IP).
    # - You can test this by getting the container IP (`172.18.0.2` for `cdp` in this case):
    #     docker container inspect cdp | jq -r '.[].NetworkSettings.Networks.["isolated-network"].IPAddress'
    #   Then curl the IP with `--connect-to ::<IP here>` to skip DNS resolution;
    #   without this `driver_opt` you'd receive a response, with it the connection is denied:
    #     curl --insecure --location --connect-to ::172.18.0.2 hello.example.internal
    driver_opts:
      com.docker.network.bridge.gateway_mode_ipv4: isolated # Requires Docker Engine v28.0+

  # For the reverse-proxy to have public/host access (internet bridged):
  public:
    # NOTE: This is a better approach than repeating an explicit bind IP for each container port to publish.
    # Only publishes container ports to the loopback network interface of the Docker host:
    # (default is `0.0.0.0`: all interfaces, thus public + docker bridge gateway IPs)
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: 127.0.0.1
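To verify the loopback-only binding once the stack is up, `docker compose port` shows the host address a published port was bound to (a sketch; `reverse-proxy` is the service name from cdp.compose.yaml):

docker compose port reverse-proxy 443
# Expected: 127.0.0.1:443 rather than 0.0.0.0:443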
> it doesn't seem to be in scope for global settings to reference otherwise

This does work in a Caddyfile without CDP (using the caddy:alpine image):
(some-snippet) {
	# ...
}

# Global settings:
{
	# Will work:
	import some-snippet
}

# Will work:
import some-snippet