caddy-docker-proxy
Not working?
Hello,
I have been searching for something like this for a long time, as it seems like it will be perfect for my needs, if I can get it to work.
I have a few docker containers that are websites: one of them is the main website and the others are sub-domain websites that work with the main site.
I followed the example and started up caddy/docker-compose.yml, then I went into whoami/docker-compose.yml and changed the "example.com" parts to a domain that I own. Let's just call it "test.com" for these purposes; it is just a simple nginx web server with static pages.
```yaml
version: '3.7'
services:
  whoami:
    image: jwilder/whoami
    networks:
      - caddy
    labels:
      caddy: whoami.test.com
      caddy.reverse_proxy: "{{upstreams 8000}}"

networks:
  caddy:
    external: true
```
I saved it as whoami/docker-compose.yml and started that one up with docker compose as well.
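For reference, the startup sequence was roughly the following (a sketch; it assumes the external caddy network was created beforehand):

```sh
# Create the shared external network, then bring up each stack
docker network create caddy
docker compose -f caddy/docker-compose.yml up -d
docker compose -f whoami/docker-compose.yml up -d
```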
Then I tried to go to https://whoami.test.com, but nothing happens and I get a "This site can't be reached" message, which seems to indicate that there is no web server answering.
Can you please give me a bit of guidance and help on this?
- Are you absolutely sure your DNS is pointing to your server?
- Do you have ports 80 and 443 open?
- What are in your Caddy container's logs?
- Please use markdown code blocks when posting your config and logs, otherwise formatting gets all broken and it becomes very difficult to read.
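For example, something like this can check each of those from a shell (a sketch; it assumes a Linux host with dig installed, and the container name is a guess):

```sh
dig +short whoami.test.com            # should print your server's public IP
sudo ss -tlnp | grep -E ':(80|443)'   # 80 and 443 should be bound by the caddy container
docker logs caddy_caddy_1             # container name is an assumption; check `docker ps`
```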
Thanks for the quick response. I am absolutely sure about the DNS pointing to my VPS server.
If I run the docker container by itself:
```sh
docker container run -d -p 8000:80 test
```
then I can easily go to http://www.test.com:8000 and see the pages.
I have been digging through the Caddy container's logs to see if there is any indication of a problem, but have found none so far.
I will have to figure out how to use markdown code blocks for the configs and logs. Sorry that the formatting got messed up.
Basically, I am just trying to get the simplest example to work so that I can get a feel for things first, and was just trying a simple Nginx server with your caddy-docker-proxy.
Markdown code blocks start with ``` on its own line, and are closed by another ``` on its own line at the end.
https://docs.github.com/en/github/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax#quoting-code
I won't be able to do much else to help without seeing your log output, full commands being run, full configs, etc. You've only shared a narrow picture of the setup, so I can only make assumptions otherwise.
Let me try it again, and I will use another domain for testing, but it is basically exactly your posted basic example, with the only change being that "example.com" is replaced with my real domain.
I will try again and see if I can get a fresh log.
Thanks for the markdown information; I just learned about those 3 tick marks and will use them properly now.
Well, I was able to get your whoami example working, as it replied:
```
I'm 6f54325e0a35
```
Maybe my nginx server problem is with the "upstreams 8000" part, and I may need to read up on this to find out how to handle a static web server, since I think that the internal port is 80 in this case.
Will do more reading and research. Thanks again.
> Maybe my nginx server problem is with the "upstreams 8000" part, and I may need to read up on this to find out how to handle a static web server, since I think that the internal port is 80 in this case.
Yeah, that makes sense: if the nginx container is listening on 80 inside the container network, then you need to use `{{upstreams 80}}` to connect to it, not the port number bound to the host.
The `{{upstreams}}` placeholder is basically a function that outputs the address of the container at the given port, so Caddy will attempt to communicate with the container within the docker network.
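A minimal illustration of the difference, with hypothetical names (`web`, the stock nginx image):

```sh
# -p publishes a port on the *host*; inside the docker network the container
# is still reached on its internal port.
docker run -d --name web --network caddy -p 8000:80 nginx
curl http://localhost:8000/   # from the host: the published port
# from another container on the same network (what {{upstreams 80}} resolves to):
docker run --rm --network caddy curlimages/curl http://web:80/
```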
Hello,
Unfortunately, I am still running into the same problem of the site not being available when I run it through your CDP, but I can reach it if I run the docker container directly. I am sure that this is a configuration problem and maybe I have not yet completely grasped the flow, although I will continue to read up on Caddy and your GitHub CDP information.
My simple Nginx server uses port 80, and I tried to modify your whoami docker-compose.yml file to see if I could get it to work:
```yaml
version: '3.7'
services:
  whoami:
    image: nextbit
    networks:
      - caddy
    labels:
      caddy: rhodyn.com www.rhodyn.com
      caddy.reverse_proxy: "{{upstreams 80}}"

networks:
  caddy:
    external: true
```
I have also attached the Caddy container logs to this post to see if that helps: `5b0f702ded080caad0f3389ae6d824c00994cb0432ddd533585ff18fbac3dc03-json.log`
Hopefully you will see something simple that I have missed. Thanks in advance.
The error message is:
```
dial tcp 172.18.0.3:80: connect: connection refused
```
So this makes me think that the container isn't actually accepting connections on port 80.
This is very strange, since if I take down Caddy and the nextbit docker-compose (with nothing running at all) and then do:
```sh
docker container run -d -p 8000:80 nextbit
```
then I can go to http://www.rhodyn.com:8000/ and the test pages come up with no problem, so the container is definitely serving on port 80 behind that basic port mapping.
Do I need to pass the port in the docker-compose.yml, perhaps? Although I thought that "upstreams 80" did that part and makes the connection to the container.
Maybe I have a wrong label in the previously posted nextbit docker-compose.yml that is not letting Caddy see it.
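A way to check what the container is actually listening on (a sketch; ss may be missing from minimal images, hence the nsenter fallback, and the container name is an assumption):

```sh
# Show the ports bound by the process inside the container
docker exec nextbit_nextbit_1 ss -tln 2>/dev/null \
  || sudo nsenter -t "$(docker inspect -f '{{.State.Pid}}' nextbit_nextbit_1)" -n ss -tln
```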
I just tried to start up things again.
```
br-ec8067d0ea7b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.19.0.1  netmask 255.255.0.0  broadcast 172.19.255.255
        ether 02:42:1d:0b:34:25  txqueuelen 0  (Ethernet)
        RX packets 37  bytes 6057 (6.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 34  bytes 2938 (2.9 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:14:82:a9:fb  txqueuelen 0  (Ethernet)
        RX packets 738988  bytes 84818953 (84.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 645993  bytes 419311003 (419.3 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
Also, what is interesting is that the latest connection refused error is:
```
{"level":"error","ts":1641486517.733862,"logger":"http.log.error","msg":"dial tcp 172.19.0.3:80: connect: connection refused"
```
All within the Docker network.
I am still searching and digging into things though.
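One check that might narrow it down (a sketch; from the Docker host itself, bridge-network IPs are normally reachable directly):

```sh
# Probe nginx at the IP from the error log, straight from the host
curl -v http://172.19.0.3:80/
```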
Turns out that the Caddy network (172.19.x.x) is the problem:
```
nextbit$ docker network inspect ec8067d0ea7b
[
    {
        "Name": "caddy",
        "Id": "ec8067d0ea7b2e7a88eae01627f5f000c03ed720310e4d4d3b60da213bce4f21",
        "Created": "2022-01-06T16:58:32.72934831+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "4d459457a0dc5b061466b8bd4ffb78a0639186d8059179b6620db4c1a7778694": {
                "Name": "nextbit_nextbit_1",
                "EndpointID": "29d6e355804dc119a55e46c7c07b61bd9f038609faab9910b088ae2cf1685caa",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "68d4259d2252fc1eabe5a93054eaadfba51fe3e0ebb0676ae472078e924bdf5c": {
                "Name": "caddy_caddy_1",
                "EndpointID": "e97ef1c68c0207424b055fbf3f16631b65b15c658fd36e11bf3132796734b7b4",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
```
Not sure how to fix it though.
Can you try connecting to the caddy container via `docker exec`, and from there curl your nginx using the same IP and port from the logs? You might need to switch to the alpine images to do that, and `apk --no-cache add curl` to get curl.
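Something along these lines (a sketch; container name and IP taken from the output above):

```sh
docker exec -it caddy_caddy_1 sh   # get a shell inside the caddy container
apk --no-cache add curl            # inside the container; alpine image assumed
curl -v http://172.19.0.3:80/      # probe nginx at the exact IP:port from the logs
```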
From the logs it looks like caddy identified everything properly, and tried to connect to the right IP and port. But indeed there is something blocking the connection. Don't know what yet.
Can you also share your nginx config? Is it using HTTP/2 over port 80, something like `listen 80 http2`? Double-check in Chrome's network tab, when you access it without CDP, whether it is HTTP/1 or HTTP/2.
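curl can also probe that directly (a sketch, using the upstream IP from the logs):

```sh
# Succeeds only if nginx speaks plaintext HTTP/2 (h2c) on port 80
curl -v --http2-prior-knowledge http://172.19.0.3:80/ -o /dev/null
# Plain HTTP/1.1 for comparison
curl -v --http1.1 http://172.19.0.3:80/ -o /dev/null
```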
@lonnietc I'm curious - what is your host OS and version?
Hi All,
I was actually just about to ask that same question of the people that are running it successfully.
I am running Ubuntu 20.04.3 LTS (64-bit) and Docker version 20.10.12, build e91ed57, on these VPS systems.
Hi All,
For the specs on my setup here:
- OS: Ubuntu 20.04.3 LTS (64-bit)
- Docker: 20.10.12, build e91ed57
- Docker-Compose: 1.29.1, build c34c88b2
- Caddy-Docker-Proxy: lucaslorentz/caddy-docker-proxy:ci-alpine
I would be interested to know some of the specifications of setups that are working well for others, as I am actually wondering if it might be the version of Docker-Compose that is the problem, as one thought.
Hello,
For the Nginx config, I have the following:
```nginx
server {
    listen       80;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    error_page  500 502 503 504  /50x.html;
    location = /50x.html {
        root  /usr/share/nginx/html;
    }

    ssi on;
}
```
Also, I tried with another docker container, "Drupal", that I grabbed from Docker Hub, and got the same type of results: it runs perfectly outside of the caddy-docker-proxy setup but will not work from inside.
Additionally, I also upgraded to Docker-Compose v2.2.3 (latest), and still the same outcome.
Maybe it is the docker engine itself or similar.
@lonnietc I don't have much experience with nginx, but it seems that `server_name` filters it to only accept requests for that domain. It is documented here: http://nginx.org/en/docs/http/server_names.html
Does it work if you change it to `server_name _;`?
What are the OS and versions that you are running?
I was just testing again and looking through the logs after the latest attempt with other containers; I do not see a connection refused, but they just do not come up either.
I have no idea what is happening.
Perhaps I will try with some other type of non-web server container and see what happens.
```
nextbit$ docker ps -a
CONTAINER ID   IMAGE                             COMMAND                  CREATED              STATUS              PORTS                                                NAMES
afd78c3f038d   nextbit                           "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp                                               nextbit-nextbit-1
4ba0b3dc4756   lucaslorentz/caddy-docker-proxy   "/bin/caddy docker-p…"   3 minutes ago        Up 3 minutes        0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 2019/tcp   caddy-caddy-1

lonnie@vmi380646:~/test/nextbit$ docker logs 4ba0b3dc4756
{"level":"info","ts":1641502073.6277328,"logger":"docker-proxy","msg":"Running caddy proxy server"}
{"level":"info","ts":1641502073.6377616,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["localhost:2019","[::1]:2019","127.0.0.1:2019"]}
{"level":"info","ts":1641502073.639328,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1641502073.6396415,"logger":"docker-proxy","msg":"Running caddy proxy controller"}
{"level":"info","ts":1641502073.6422234,"logger":"docker-proxy","msg":"Start","CaddyfilePath":"","LabelPrefix":"caddy","PollingInterval":30,"ProcessCaddyfile":true,"ProxyServiceTasks":true,"IngressNetworks":"[caddy]"}
{"level":"info","ts":1641502073.6459103,"logger":"docker-proxy","msg":"Connecting to docker events"}
{"level":"info","ts":1641502073.6470573,"logger":"docker-proxy","msg":"IngressNetworksMap","ingres":"map[f6cb45b4d74dcd3005b73aaa34b0e3dbe47aa09f54f76f933cf8d3b3c9744f9d:true]"}
{"level":"info","ts":1641502073.6730535,"logger":"docker-proxy","msg":"Swarm is available","new":false}
{"level":"info","ts":1641502073.6731305,"logger":"docker-proxy","msg":"Skipping default Caddyfile because no path is set"}
{"level":"info","ts":1641502073.6731415,"logger":"docker-proxy","msg":"Skipping swarm config caddyfiles because swarm is not available"}
{"level":"info","ts":1641502073.6754496,"logger":"docker-proxy","msg":"Skipping swarm services because swarm is not available"}
{"level":"info","ts":1641502073.675498,"logger":"docker-proxy","msg":"New Caddyfile","caddyfile":"# Empty caddyfile"}
{"level":"warn","ts":1641502073.6756413,"logger":"docker-proxy","msg":"Caddyfile to json warning","warn":"[Caddyfile:1: input is not formatted with 'caddy fmt']"}
{"level":"info","ts":1641502073.675668,"logger":"docker-proxy","msg":"New Config JSON","json":"{}"}
{"level":"info","ts":1641502073.6758416,"logger":"docker-proxy","msg":"Sending configuration to","server":"localhost"}
{"level":"info","ts":1641502073.6780636,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_addr":"127.0.0.1:48360","headers":{"Accept-Encoding":["gzip"],"Content-Length":["41"],"Content-Type":["application/json"],"User-Agent":["Go-http-client/1.1"]}}
{"level":"info","ts":1641502073.6782212,"logger":"admin.api","msg":"config is unchanged"}
{"level":"info","ts":1641502073.6782744,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1641502073.67849,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
{"level":"info","ts":1641502103.649594,"logger":"docker-proxy","msg":"Skipping default Caddyfile because no path is set"}
{"level":"info","ts":1641502103.649679,"logger":"docker-proxy","msg":"Skipping swarm config caddyfiles because swarm is not available"}
{"level":"info","ts":1641502103.6525846,"logger":"docker-proxy","msg":"Skipping swarm services because swarm is not available"}
{"level":"info","ts":1641502133.7101336,"logger":"docker-proxy","msg":"Skipping default Caddyfile because no path is set"}
{"level":"info","ts":1641502133.710194,"logger":"docker-proxy","msg":"Skipping swarm config caddyfiles because swarm is not available"}
{"level":"info","ts":1641502133.7124658,"logger":"docker-proxy","msg":"Skipping swarm services because swarm is not available"}
{"level":"info","ts":1641502163.4820263,"logger":"docker-proxy","msg":"Skipping default Caddyfile because no path is set"}
{"level":"info","ts":1641502163.4821851,"logger":"docker-proxy","msg":"Skipping swarm config caddyfiles because swarm is not available"}
{"level":"info","ts":1641502163.484533,"logger":"docker-proxy","msg":"Skipping swarm services because swarm is not available"}
{"level":"info","ts":1641502164.644434,"logger":"docker-proxy","msg":"Skipping default Caddyfile because no path is set"}
{"level":"info","ts":1641502164.6445053,"logger":"docker-proxy","msg":"Skipping swarm config caddyfiles because swarm is not available"}
{"level":"info","ts":1641502164.648218,"logger":"docker-proxy","msg":"Skipping swarm services because swarm is not available"}
{"level":"info","ts":1641502194.6752725,"logger":"docker-proxy","msg":"Skipping default Caddyfile because no path is set"}
{"level":"info","ts":1641502194.675378,"logger":"docker-proxy","msg":"Skipping swarm config caddyfiles because swarm is not available"}
{"level":"info","ts":1641502194.6783721,"logger":"docker-proxy","msg":"Skipping swarm services because swarm is not available"}
{"level":"info","ts":1641502224.6453354,"logger":"docker-proxy","msg":"Skipping default Caddyfile because no path is set"}
{"level":"info","ts":1641502224.6454527,"logger":"docker-proxy","msg":"Skipping swarm config caddyfiles because swarm is not available"}
{"level":"info","ts":1641502224.6492038,"logger":"docker-proxy","msg":"Skipping swarm services because swarm is not available"}
{"level":"info","ts":1641502254.6456537,"logger":"docker-proxy","msg":"Skipping default Caddyfile because no path is set"}
{"level":"info","ts":1641502254.6457388,"logger":"docker-proxy","msg":"Skipping swarm config caddyfiles because swarm is not available"}
{"level":"info","ts":1641502254.649143,"logger":"docker-proxy","msg":"Skipping swarm services because swarm is not available"}
```
Hi All,
Just had some MIXED Success !!!
I decided to try some other container besides the Nginx-based ones. I grabbed RethinkDB from Docker Hub (https://hub.docker.com/_/rethinkdb) and set up almost exactly the same docker-compose.yml as the previous Nginx one:
```yaml
# Proxy with matches and route
version: '3.7'
services:
  rethinkdb:
    image: rethinkdb
    networks:
      - caddy
    labels:
      caddy: rhodyn.com www.rhodyn.com
      caddy.reverse_proxy: "{{upstreams 8080}}"

networks:
  caddy:
    external: true
```
Then I started it up as normal and, to my great surprise, it worked when I went to the test URL https://www.rhodyn.com.
Then I took that docker-compose file, copied it over, and made just a couple of changes to re-test the Nginx container using a known-working sub-domain name, https://test.rhodyn.com:
```yaml
# Proxy with matches and route
version: '3.7'
services:
  test:
    image: nextbit
    networks:
      - caddy
    labels:
      caddy: test.rhodyn.com
      caddy.reverse_proxy: "{{upstreams 80}}"

networks:
  caddy:
    external: true
```
But this time it did NOT work, and the log files are now showing the "connection refused" message again:
```
{"level":"error","ts":1641503430.913562,"logger":"http.log.error","msg":"dial tcp 172.20.0.4:80: connect: connection refused","request":{"remote_addr":"69.140.233.83:2697","proto":"HTTP/2.0","method":"GET","host":"test.rhodyn.com","uri":"/","headers":{"Sec-Fetch-Mode":["navigate"],"Sec-Fetch-User":["?1"],"Sec-Fetch-Dest":["document"],"Accept-Encoding":["gzip, deflate, br"],"Accept-Language":["en-US,en;q=0.9"],"Sec-Ch-Ua":["\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"96\", \"Google Chrome\";v=\"96\""],"Sec-Ch-Ua-Platform":["\"Windows\""],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36"],"Sec-Fetch-Site":["none"],"Sec-Gpc":["1"],"Sec-Ch-Ua-Mobile":["?0"],"Upgrade-Insecure-Requests":["1"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","proto_mutual":true,"server_name":"test.rhodyn.com"}},"duration":0.000926359,"status":502,"err_id":"qhuhhm1p5","err_trace":"reverseproxy.statusError (reverseproxy.go:858)"}
{"level":"info","ts":1641503455.4088128,"logger":"docker-proxy","msg":"Skipping default Caddyfile because no path is set"}
{"level":"info","ts":1641503455.409093,"logger":"docker-proxy","msg":"Skipping swarm config caddyfiles because swarm is not available"}
{"level":"info","ts":1641503455.4144835,"logger":"docker-proxy","msg":"Skipping swarm services because swarm is not available"}
```
Also, the RethinkDB is still working and resolving just fine.
I have to conclude that there may be an issue in caddy-docker-proxy (CDP) where it is not completing the connection to port 80 of a container, or maybe not finding a proper label somewhere.
Perhaps CDP is getting confused between its own port 80 and those of other containers for the "upstreams 80", or something.
I do not really know at this point.
Maybe I can try another type of web server container with my pages to see what happens.
What web servers are others using that work with caddy-docker-proxy, so that I might try one of those?
@lonnietc I've just read the nginx docs again. Don't try `server_name _;` as I mentioned above. Instead, try changing `listen 80;` to `listen 80 default_server;`.
I think this should fix your problem. Maybe you always accessed nginx using localhost and it worked, and that's why you had the impression your nginx config was right; it only fails when you access it from a different domain.
https://stackoverflow.com/questions/9454764/nginx-server-name-wildcard-or-catch-all
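If it helps, the server_name theory can be tested directly from the host with explicit Host headers (a sketch; the upstream IP is taken from the error log above):

```sh
curl -v -H "Host: test.rhodyn.com" http://172.20.0.4/   # the Host header Caddy forwards
curl -v -H "Host: localhost" http://172.20.0.4/         # the server_name in the nginx config
```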
I have just been trying this for a while, as well as reading over the documentation, but still no luck.
I am going to attempt another type of web server to see if I get the same results.
Hi All,
I now have SUCCESS !!!!!
Turns out, from what I can see, that the Nginx server cannot listen on port 80 here, so I put it on port 8080 and it came right up.
@lucaslorentz I tried the changes that you mentioned and had the same result of no connection, so I just kept the Nginx conf the same but changed the port to 8080, with no other changes to the web server configuration at all:
```nginx
server {
    listen       8080;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    error_page  500 502 503 504  /50x.html;
    location = /50x.html {
        root  /usr/share/nginx/html;
    }

    ssi on;
}
```
Originally, I also tested with the Nginx port at 80 and "upstreams 80" in docker-compose.yml, but that did not work, so I changed everything to Nginx port 8080 and "upstreams 8080" in docker-compose.yml, and it worked.
Of course, after changing the default.conf to "listen 8080" I then had to make a fresh image with "docker build -t nextbit ." each time, but that was no real problem.
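For anyone following along, the edit-rebuild loop was roughly (a sketch, assuming the Dockerfile is in the current directory):

```sh
docker build -t nextbit .               # rebuild after editing default.conf
docker compose up -d --force-recreate   # recreate the running service
```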
After analyzing all of this, it "seems" to me that there is a problem in the caddy-docker-proxy code which somehow conflicts with port 80 of the containers. I could, of course, be wrong here, but that is what it really looks like, given the test cases I have run and the data collected so far.
It might be worth investigating more to either confirm that this is true, or not.
Also, I just wanted to thank you so very much for all of the effort and help in getting this working and running. Much appreciated, my friends!