caddy-docker-proxy
TCP tunneling feature request
This plugin is extremely useful for proxying TCP connections over the same ports as HTTP:
https://github.com/mholt/caddy-l4
Currently caddy-l4 does not support Caddyfiles and instead requires a JSON config (which isn't supported here).
Right now I use Traefik for TCP tunneling on a different port, but for a lot of use cases that's overkill. It'd be great to see the caddy-l4 plugin baked in here.
This plugin is entirely designed around outputting Caddyfile configs. It would essentially require a total rewrite to output JSON instead.
Best to wait until caddy-l4 gains Caddyfile support.
Understood. One thing I would suggest that would enable support for a lot of odd plugins is a "merge" functionality:
As a potential implementation, your code could first "adapt" the generated Caddyfile to JSON (this may already happen):
caddy adapt --config Caddyfile > CaddyFile.json
Then your code could check two environment variables:
JSON_CONFIG_MERGE - This could store a JSON value to be merged, or be blank for no change. DEFAULT="" EX: {"logging":{"logs":{"default":{"level":"DEBUG"}}}}
JSON_CONFIG_MERGE_COMMAND - This could store a jq filter used to combine the two JSON objects. DEFAULT=".[0] * .[1]" EX: "add"
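For context on those defaults: jq's * operator merges objects recursively, while add only combines top-level keys (the second object wins on conflicts), which matters when both configs touch the same app. A quick illustration:

$ echo '{"a":{"x":1}} {"a":{"y":2}}' | jq -sc '.[0] * .[1]'
{"a":{"x":1,"y":2}}
$ echo '{"a":{"x":1}} {"a":{"y":2}}' | jq -sc 'add'
{"a":{"y":2}}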
Then, if JSON_CONFIG_MERGE is not blank, you could have a script run:
if [ -n "$JSON_CONFIG_MERGE" ]; then
  caddyConfigJson=$(cat CaddyFile.json)
  # quote the filter so jq receives it as a single argument
  jq -s "${JSON_CONFIG_MERGE_COMMAND:-.[0] * .[1]}" \
    <(echo "${caddyConfigJson}") \
    <(echo "${JSON_CONFIG_MERGE}") > CaddyFile.json
fi
For example, using this as an example CaddyFile.json:
{
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [
            ":443"
          ],
          "routes": [
            {
              "match": [
                {
                  "host": [
                    "portainer.example.com"
                  ]
                }
              ],
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {
                          "handler": "reverse_proxy",
                          "upstreams": [
                            {
                              "dial": "10.0.11.8:8080"
                            }
                          ]
                        }
                      ]
                    }
                  ]
                }
              ],
              "terminal": true
            }
          ]
        }
      }
    }
  }
}
We could do the following:
JSON_CONFIG_MERGE='{"logging":{"logs":{"default":{"level":"DEBUG"}}}}'
JSON_CONFIG_MERGE_COMMAND=add
caddyConfigJson=$(cat CaddyFile.json)
jq -s "$JSON_CONFIG_MERGE_COMMAND" \
  <(echo "${caddyConfigJson}") \
  <(echo "${JSON_CONFIG_MERGE}") > CaddyFile.merged.json
Finally, here is the merged file CaddyFile.merged.json:
{
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [
            ":443"
          ],
          "routes": [
            {
              "match": [
                {
                  "host": [
                    "portainer.example.com"
                  ]
                }
              ],
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {
                          "handler": "reverse_proxy",
                          "upstreams": [
                            {
                              "dial": "10.0.11.8:8080"
                            }
                          ]
                        }
                      ]
                    }
                  ]
                }
              ],
              "terminal": true
            }
          ]
        }
      }
    }
  },
  "logging": {
    "logs": {
      "default": {
        "level": "DEBUG"
      }
    }
  }
}
I had a closer look at your code, and it looks like it's in Go. Maybe something similar to the above could be achieved with https://pkg.go.dev/github.com/RaveNoX/go-jsonmerge
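For illustration, here is a minimal sketch of that kind of recursive merge using only the standard library (deliberately not assuming go-jsonmerge's exact API, which should be checked against its docs; mergeJSON is a hypothetical helper):

package main

import (
	"encoding/json"
	"fmt"
)

// mergeJSON is a hypothetical helper (not go-jsonmerge's API) that
// recursively merges patch into base, mirroring jq's ".[0] * .[1]"
// semantics: objects merge key by key, anything else in patch
// replaces the corresponding base value.
func mergeJSON(base, patch interface{}) interface{} {
	baseMap, okBase := base.(map[string]interface{})
	patchMap, okPatch := patch.(map[string]interface{})
	if !okBase || !okPatch {
		return patch
	}
	for key, value := range patchMap {
		baseMap[key] = mergeJSON(baseMap[key], value)
	}
	return baseMap
}

func main() {
	// generated stands in for the adapted Caddyfile JSON;
	// override is the JSON_CONFIG_MERGE value from the proposal above.
	generated := []byte(`{"apps":{"http":{"servers":{}}}}`)
	override := []byte(`{"logging":{"logs":{"default":{"level":"DEBUG"}}}}`)

	var base, patch interface{}
	if err := json.Unmarshal(generated, &base); err != nil {
		panic(err)
	}
	if err := json.Unmarshal(override, &patch); err != nil {
		panic(err)
	}

	merged, err := json.Marshal(mergeJSON(base, patch))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(merged))
}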
I think this is doable, and we have had requests before to support JSON configuration. Maybe we can use JSON Patch syntax.
Damn, I was coming to ask for advice on how to make the L4 plugin work as well. I'm hoping to get SSH access over port 443, like GitHub provides, to get around stupid work firewalls. I don't know Go. Would this be something a novice could implement if I tried, or would I be best to just wait? JSON support would be great.
Caddy-l4 combined with caddy-docker-proxy would be great! At the moment I use an older sshpiper with docker-gen and a custom quick-and-dirty script to proxy SSH/SFTP.
I agree with @binaryben, I've been trying to do the same thing for a while to get a VPN running such that I can use something like vpn.example.com to access the VPN container, but of course Caddy normally only does HTTP traffic so that won't work without mholt/caddy-l4 support.
This can be partially implemented with https://github.com/RussellLuo/caddy-ext/tree/master/layer4
Build a Caddy image with github.com/RussellLuo/caddy-ext/layer4 and github.com/lucaslorentz/caddy-docker-proxy/v2. Then you can define rules like this:
layer4:
  restart: always
  init: true
  network_mode: none
  read_only: true
  image: alpine
  command: sleep infinity
  labels:
    caddy.layer4.:21101.proxy.to: 192.168.100.101:3000
    caddy.layer4.:21069.proxy.to: 192.168.100.69:3000
    # ....
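For reference, those labels get converted into a global layer4 block along these lines (a sketch of the expected output, based on how CDP expands nested label keys; compare the expanded logs further down this thread):

{
	layer4 {
		:21101 {
			proxy {
				to 192.168.100.101:3000
			}
		}
		:21069 {
			proxy {
				to 192.168.100.69:3000
			}
		}
	}
}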
This is exactly what I'm looking for at the moment, as I need sticky TCP connections for IMAP, which Traefik doesn't support.
Can you clarify a bit more? Could you give an example for (Swarm) services?
github.com/lucaslorentz/caddy-docker-proxy/v2 doesn't work, but there is something in github.com/lucaslorentz/caddy-docker-proxy though.
Some more (working) detailed info would be great!
Thanks!
Hi, @di-rect
Sorry, I didn't get what you mean exactly. What problem are you experiencing? That snippet is part of my configs (with ports and IPs replaced with more generic ones). I don't use Swarm, but I guess moving the labels into the deploy section would be enough.
If you see any errors with this kind of setup in Caddy's logs, that would make it easier to help.
In the snippet I provided I use a dummy container that only runs sleep infinity. Its sole purpose is to provide the labels. I do that to keep the configuration "ad-hoc".
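So for Swarm, that dummy service would presumably become something like this (a sketch; untested, since I don't use Swarm myself):

# hypothetical Swarm variant of the dummy service above (untested)
layer4:
  image: alpine
  command: sleep infinity
  deploy:
    labels:
      caddy.layer4.:21101.proxy.to: 192.168.100.101:3000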
Anyway, here's a full docker-compose.yml example.
version: "3.9"

volumes:
  data_caddy: {}

networks:
  ingress:

x-constants:
  - &default_restart unless-stopped

x-snippets:
  environment: &env
    PUID:
    PGID:
    TZ:
  networks: &net
    ingress:

x-templates:
  default: &default
    environment: *env
    networks: *net
    restart: *default_restart
  dummy: &dummy
    restart: *default_restart
    environment: *env
    init: true
    network_mode: none
    read_only: true
    image: alpine
    command: sleep infinity

services:
  caddy:
    <<: *default
    container_name: caddy
    build:
      context: build/caddy
      args:
        VERSION: ${CADDY_VERSION}
        GOPROXY: https://goproxy.io|https://proxy.golang.org|direct
        PLUGINS: >-
          github.com/RussellLuo/caddy-ext/layer4
          github.com/lucaslorentz/caddy-docker-proxy/v2
    ports:
      - "80:80/tcp"
      - "443:443/tcp"
      - "443:443/udp"
      - "9143:9143/tcp"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /config/caddy:/config:ro
      - data_caddy:/data
    environment:
      <<: *env
      BASE_DOMAIN: # passed from .env
      CADDY_DOCKER_CADDYFILE_PATH: /config/Caddyfile.globals
      CADDY_DOCKER_PROCESS_CADDYFILE: "true"
      XDG_CONFIG_HOME: /data
    labels:
      caddy.log: default
      caddy.log.format: json

  example1:
    <<: *dummy
    labels:
      caddy.layer4.:9143.proxy.to: 192.168.100.100:143

  example2:
    <<: *default
    image: marcnuri/port-forward
    environment:
      REMOTE_HOST: 192.168.100.100
      REMOTE_PORT: 143
    ports:
      - target: 80
        published: 143
        protocol: tcp
        mode: host
Also, the .env file has:
PUID=1052
PGID=1052
TZ=Europe/Moscow
CADDY_VERSION=2.7.5
BASE_DOMAIN=my-domain.example
And in build/caddy/Dockerfile I have:
# syntax=docker/dockerfile-upstream:master-labs
ARG VERSION=2.6.2
FROM caddy:${VERSION}-builder as builder
ARG GOPROXY=https://proxy.golang.org|direct
# space-separated list of plugins
ARG PLUGINS=github.com/lucaslorentz/caddy-docker-proxy/v2
ENV GOMODCACHE=/modcache
RUN --mount=type=cache,target=/modcache/cache <<"EOF"
xcaddy build ` echo "$PLUGINS" | xargs -r printf "--with %s " `
EOF
ARG VERSION=${VERSION}
FROM caddy:${VERSION}
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
ENTRYPOINT ["caddy"]
CMD ["docker-proxy"]
This setup shows two examples of how to proxy raw TCP traffic, assuming you have an IMAP server listening at 192.168.100.100:143 (it can be referenced by hostname too).
example1 uses the layer4 plugin to forward traffic from port 9143 to the IMAP server. Notice that this approach requires you to expose this port explicitly on the caddy service.
example2 is another approach I use sometimes. The marcnuri/port-forward image is a wrapper around socat that forwards from port 80 (by default) to whatever host:port you specify. In this case, you need to define the port forwarding on this service instead.
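In other words, the example2 service boils down to roughly this socat invocation (my approximation, not the image's exact command line):

# approximate equivalent of the example2 service (not the image's exact invocation)
socat TCP-LISTEN:80,fork,reuseaddr TCP:192.168.100.100:143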
@Mikle-Bond Wow thanks for the explanation!
This looks good, I needed the explanation of the forwarding in the labels!
As you might know, IMAP needs sticky connections from a host to the same backend server when doing load balancing. I'm trying to figure out how I could accomplish this within Docker Swarm, as my connections go all over the place between the IMAP containers when using Traefik.
As IMAP is TCP, I first need the TCP plugin for Caddy, but then there's the sticky part with this/your implementation. Do you think that would be possible with Caddy?
Ah. Well, I see that layer4 implements the IP-hash load balancing policy: https://github.com/mholt/caddy-l4/blob/78853879f66772f4363a3dbfcd7104e8672dbdb3/modules/l4proxy/loadbalancing.go#L268-L270
You could try adding this to the labels of the IMAP service:
caddy.layer4.:143.proxy.to: "{{ upstreams 143 }}"
caddy.layer4.:143.proxy.lb_policy: ip_hash
Though this might not work if your Caddy server doesn't receive the real client IP and only sees the ingress IP. I'm not sure how Swarm handles this. You will probably have to follow this advice too, and configure port 143 for host-mode forwarding on the caddy service.
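If that works, the rendered Caddyfile block should look roughly like this (a sketch; the upstream addresses are placeholders for whatever {{ upstreams 143 }} resolves to for your replicas):

{
	layer4 {
		:143 {
			# upstream addresses below are placeholders
			proxy {
				to 10.0.1.5:143 10.0.1.6:143
				lb_policy ip_hash
			}
		}
	}
}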
@Mikle-Bond thank you for the explanations above, which were very helpful. I'm using Swarm mode, and the labels I am using are not getting translated into Caddy-compliant layer4 Caddyfile entries. I've built a custom image with both caddy-docker-proxy and caddy-ext/layer4. Here's the Dockerfile:
FROM caddy:builder-alpine as builder
RUN xcaddy build \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
    --with github.com/RussellLuo/caddy-ext/layer4
FROM caddy:alpine
COPY --from=builder /usr/bin/caddy /bin/caddy
ENTRYPOINT ["/bin/caddy"]
CMD ["docker-proxy"]
I am running Caddy via Docker Compose in host network mode, not in Swarm mode, because I ultimately want Caddy to map specific IP addresses to services, and Swarm doesn't allow binding to specific IP addresses (it defeats the purpose of scalable stacks across multiple hosts, though Kubernetes does allow it). Here's the Docker Compose YAML for that:
version: '3.3'
services:
  caddy:
    image: <image I built above in internal registry>
    container_name: caddy
    network_mode: host
    environment:
      - CADDY_INGRESS_NETWORKS=caddy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /volumes/caddy_data:/data
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped
And here's how I am using the labels in my Swarm YAML file for my deployment:
version: "3.8"
services:
  mc:
    ...
    deploy:
      labels:
        caddy.layer4.:6692.proxy.to: "{{upstreams 6692}}"
    networks:
      ...
      - caddy
With this setup, I get an error in the Caddy container:
{"level":"info","ts":1703087529.608864,"logger":"docker-proxy","msg":"Process Caddyfile","logs":"[ERROR] Removing invalid block: parsing caddyfile tokens for 'layer4': wrong argument count or unexpected line ending after 'to', at Caddyfile:5\n{\n\tlayer4 {\n\t\t:6692 {\n\t\t\tproxy {\n\t\t\t\tto\n\t\t\t}\n\t\t}\n\t}\n}\n\n"}
Expanding all the \n and \ts, this is what is getting entered in the Caddyfile:
{
	layer4 {
		:6692 {
			proxy {
				to
			}
		}
	}
}
So my "{{upstreams 6692}}" is not getting processed into the Caddyfile. If I move the labels outside of the deploy entry, I get a different error:
{"level":"info","ts":1703088044.5198956,"logger":"docker-proxy","msg":"Process Caddyfile","logs":"[ERROR] Removing invalid block: parsing caddyfile tokens for 'layer4': wrong argument count or unexpected line ending after 'to', at Caddyfile:5\n{\n\tlayer4 {\n\t\t:6692 {\n\t\t\tproxy {\n\t\t\t\tto\n\t\t\t\tto 192.168.144.2:6692\n\t\t\t}\n\t\t}\n\t}\n}\n\n"}
Again expanding:
{
	layer4 {
		:6692 {
			proxy {
				to
				to 192.168.144.2:6692
			}
		}
	}
}
So this time the upstreams are getting included, but there is an additional blank to entry. If you (or anyone else) can cast any light on what I am doing wrong here, it would be much appreciated.
You might have an old copy of a container with the old label stopped but not removed. Make sure to wipe out all stopped containers. You can run docker system prune I think. Then down your stack and bring it back up.
@francislavoie thanks for the response. In my original test, I completely removed the stack using docker stack rm and reprovisioned it using docker stack deploy -c <YAML-file>. I just removed it again, did a docker system prune like you suggested, and redeployed it with the labels under deploy. Here's the resulting config in the Caddy container:
{
	layer4 {
		:6692 {
			proxy {
				to
			}
		}
	}
}
I moved the labels up a level (outside of deploy) as a test as well: I removed the stack, did another docker system prune, and redeployed the stack. This time I got the same config as above, which is different from what I got previously (when I didn't do a docker system prune): that time it did insert the IP address after the to, but there was a second blank to which was causing an error.
I'm not sure of next steps here. I would love to have the ability to map host TCP ports to Docker services without having to manually specify container IPs. I think my next step will be to run two separate Caddy instances: one with caddy-docker-proxy to automatically proxy web services, and one with layer4 which I will have to configure manually (which will be painful, as I have hundreds of these port mappings to manage).
If anyone has any other ideas about what I am doing wrong here, please let me know! Thanks in advance.
If I move the labels outside of the deploy entry, I get a different error
@smaccona did you move them, or duplicate the labels at both levels?
Probably the container network is not being detected as an ingress network by CDP. Try configuring ingress networks as per the readme. Maybe use "host" as the network name, but I'm not sure if that works for the host network.
Each Docker container may have many IPs on many networks; CDP needs to know which networks/IPs it should use to reach the container.
Edit: I just noticed you have CADDY_INGRESS_NETWORKS=caddy. Try checking whether your containers are all in that network. Docker Swarm stacks also tend to generate a different network name by adding the stack name to it when the network is created as part of the stack. Give the network an explicit name in the YAML file to prevent unexpected names.
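For example, the relevant part of the stack file could look like this (a sketch; the name field pins the network name so Swarm doesn't prefix it with the stack name):

networks:
  caddy:
    # explicit name prevents the stack-name prefix; attachable lets
    # non-Swarm containers (like this Caddy) join the overlay network
    name: caddy
    attachable: true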
I was able to resolve this: my issue was that the caddy network was created in Swarm mode, but I forgot to make it attachable. Thank you all for your help.
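For anyone hitting the same thing, an attachable overlay network can be created up front like so (the network name matches the CADDY_INGRESS_NETWORKS value used above):

# "caddy" matches the CADDY_INGRESS_NETWORKS value used above
docker network create --driver overlay --attachable caddy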
Closing as this can now be achieved using https://github.com/RussellLuo/caddy-ext/tree/master/layer4
If any feature from mholt/caddy-l4 still can't be configured via Caddyfile, I would suggest opening an issue or contributing to https://github.com/RussellLuo/caddy-ext/tree/master/layer4