caddy-docker-proxy
502 Bad Gateway
Hey! I want to expose my services with this proxy. Unfortunately, I get 502 bad gateway errors when I try to access anything.
This is what shows up in the logs:
{
  "level": "error",
  "ts": 1648811211.2824142,
  "logger": "http.log.error",
  "msg": "dial tcp 172.19.0.3:80: i/o timeout",
  "request": {
    "remote_addr": "172.19.0.1:38284",
    "proto": "HTTP/2.0",
    "method": "GET",
    "host": "vault.example.com",
    "uri": "/",
    "headers": {
      "Accept-Language": ["en-CA,en-US;q=0.9,en;q=0.8"],
      "Accept-Encoding": ["gzip"],
      "Cf-Ray": ["6f50db56ea0b8e44-TLV"],
      "User-Agent": ["Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.2 Safari/605.1.15"],
      "Accept": ["text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"],
      "X-Forwarded-Proto": ["https"],
      "Cf-Visitor": ["{\"scheme\":\"https\"}"],
      "Cf-Connecting-Ip": ["93.173.34.192"],
      "Cdn-Loop": ["cloudflare"],
      "Cf-Ipcountry": ["IL"],
      "X-Forwarded-For": ["93.173.34.192"]
    },
    "tls": {
      "resumed": false,
      "version": 772,
      "cipher_suite": 4865,
      "proto": "h2",
      "proto_mutual": true,
      "server_name": "vault.example.com"
    }
  },
  "duration": 10.007945743,
  "status": 502,
  "err_id": "bm8505ui2",
  "err_trace": "reverseproxy.statusError (reverseproxy.go:886)"
}
Here are my docker-compose files:
/reserve-proxy/docker-compose.yml
version: "3.5"
services:
reverse-proxy:
image: lucaslorentz/caddy-docker-proxy:ci-alpine
container_name: reverse-proxy
networks:
- proxy-net
environment:
- CADDY_INGRESS_NETWORKS=proxy-net
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./certs:/certs:ro
- ./data:/data
ports:
- 80:80
- 443:443
labels:
caddy: :443
caddy.tls: "/certs/cert.pem /certs/cert.key"
restart: unless-stopped
networks:
proxy-net:
name: proxy-net
The certificates I added are generated by Cloudflare to secure the connection between my server and theirs.
/vaultwarden/docker-compose.yml
version: "3.5"
services:
vaultwarden:
image: vaultwarden/server:latest
container_name: vaultwarden
environment:
- SIGNUPS_ALLOWED=false
volumes:
- ./data:/data
networks:
- proxy-net
labels:
caddy: vault.example.com
caddy.reverse_proxy: "{{upstreams 80}}"
restart: unless-stopped
networks:
proxy-net:
external: true
I'd appreciate any support, thank you!
I needed to add network_mode: host in order for it to work. I'd like to get a better solution that uses Docker networks if that's possible!
It's possible the container network name is different or misconfigured - Docker Compose could have modified the network name behind the scenes.
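A quick way to check is to list the networks Docker actually created and see which containers are attached. A sketch, assuming a standard Docker CLI - the prefixed name reserve-proxy_proxy-net is only a guess based on the folder name above:

docker network ls
# does it show proxy-net, or something like reserve-proxy_proxy-net?
docker network inspect proxy-net --format '{{range .Containers}}{{.Name}} {{end}}'
# both reverse-proxy and vaultwarden should be listed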
> I needed to add network_mode: host in order for it to work. I'd like to get a better solution that uses Docker networks if that's possible!
This is further evidence that Docker Compose could have modified the container network name behind the scenes, as network_mode: host bypasses Docker networking entirely.
Can you create a repo that will reproduce this error with just a docker compose up?
I'm getting this too. I have to restart caddy, which is not ideal.
I can screen record and create an example repo. I'm using the standalone config.
> create example repo.
A repo that will reproduce this error with just a docker compose up would really help, thank you!
FWIW, I don't get this on my local, single-machine swarm (at least I haven't noticed it) - only on my staging environment with 2 nodes and a standalone (controller + proxy in one install) caddy service. It is running behind Cloudflare too (as @planecore's example log shows).
A little update, @planecore - this might help you.
Creating a repo would likely not have done much good, because there are more moving parts in the environment where I was having this issue. So I did a bit more testing and now know more.
I could consistently reproduce this:
- Start CDP only with local_certs.
- Deploy service whoami (this is on swarm) on whoami.example.com.
- Access service whoami - OK.
- Restart the service.
- Access the service - 'Bad gateway' - for an indeterminate period (I haven't had it suddenly work).
What does work fine: the new Caddyfile is generated - I examined the JSON and it had the new internal IP of the service. Querying localhost:2019/config also correctly returns the new Caddyfile.
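(That check is just Caddy's admin API; from inside the proxy container, something like this should dump the live config - assuming the busybox wget that ships in the alpine image:)

wget -qO- http://localhost:2019/config/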
But...
- When I ssh into the caddy service and wget new_ip:port, I get connection refused. So Docker has assigned the IP and the service is 'up', but the IP/port is not responding. When I wget service_name:port, it reaches the service at the OLD IP!!
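Roughly the check described above, for anyone who wants to repeat it - caddy here is the proxy container's name, and NEW_IP/PORT are placeholders for the values from your generated config:

# open a shell inside the running proxy container
docker exec -it caddy sh
# hit the upstream by the IP from the regenerated config: connection refused
wget -qO- http://NEW_IP:PORT
# hit it by service name: it answers, but from the OLD task IP
wget -qO- http://whoami:PORT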
So I tried a couple of things:
- I set a health check - this didn't make a difference (it was still being run against the unresponsive IP).
- I set the service to 'start-first' on update and this WORKED! Best I can figure is that Docker somehow doesn't update its internal DNS correctly.
This is consistently working now. There may be something that can be done in the CDP code to trigger a DNS update - I don't know. But a 'start-first' update policy helped me overcome my issue and will be the default for my deployments now.
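For anyone wanting to try the same: in a v3.4+ stack file this is the deploy.update_config.order setting, and it can also be applied to an already-running service from the CLI. A sketch, with whoami standing in for your service name:

docker service update --update-order start-first whoami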
I can record the screen stuff I offered before if you still want to see it.
Hi all - I thought I had gotten this sorted with the 'start-first' update, but it is not working consistently. I get issues when the service has multiple networks (e.g. attached to both a database network and a routable one). I also get issues when there are multiple replicas of the same service (some will update, some won't).
As a result, unfortunately I will be moving back to Traefik, as that is more stable at the moment.
I can consistently reproduce this on my machine with a very simple local setup and a single compose file:
version: "3.7"
services:
caddy:
image: lucaslorentz/caddy-docker-proxy:ci-alpine
ports:
- 80:80
- 443:443
environment:
- CADDY_INGRESS_NETWORKS=caddy
networks:
- caddy
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- caddy_data:/data
restart: unless-stopped
whoami:
image: jwilder/whoami
networks:
- caddy
labels:
caddy: mydomain.com
caddy.reverse_proxy: "{{upstreams 8000}}"
networks:
caddy:
volumes:
caddy_data: {}
@danog Docker Compose prefixes the networks it creates with your folder name, so a network caddy defined in a folder called myapp actually becomes myapp_caddy. Your CADDY_INGRESS_NETWORKS config is probably incorrect because of that. You can force the network name you want by adding name to your network definition.
This works:
version: "3.7"
services:
caddy:
image: lucaslorentz/caddy-docker-proxy:ci-alpine
ports:
- 80:80
- 443:443
environment:
- CADDY_INGRESS_NETWORKS=caddy
networks:
- caddy
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- caddy_data:/data
restart: unless-stopped
whoami:
image: jwilder/whoami
networks:
- caddy
labels:
caddy: mydomain.com
caddy.reverse_proxy: "{{upstreams 8000}}"
caddy.tls: internal
networks:
caddy:
name: caddy
volumes:
caddy_data: {}
Tested with:
docker compose up -d && sleep 5 && curl -kvL --resolve mydomain.com:80:127.0.0.1 --resolve mydomain.com:443:127.0.0.1 http://mydomain.com
Ayy, thanks, this works perfectly!