docker-gen
no servers are inside upstream in `/etc/nginx/conf.d/default.conf:34`
I've been working on setting up a Docker instance which will host multiple containers, using jwilder/docker-gen, jrcs/letsencrypt-nginx-proxy-companion and the official nginx container.
I've set up all of the containers using these instructions, however there seems to be an issue which I think is most likely related to docker-gen. I might be wrong here, and am happy to stand corrected.
Once all three containers are running I launch a docker-registry container which listens on port 5000, with the following environment variables set:
- VIRTUAL_PORT=5000
- VIRTUAL_HOST=registry.danielgroves.net
I've intentionally not set the Let's Encrypt environment variables yet, as I'd like to get this working without them before adding the additional complexity. When I hit the given URL I get a 503 back from nginx, and when running docker logs nginx I can see that it can't find any upstream servers: 2016/05/25 10:40:47 [emerg] 1#1: no servers are inside upstream in /etc/nginx/conf.d/default.conf:34.
I've included the contents of the default.conf file at the bottom of this issue.
As far as I can tell docker-gen isn't picking up the container properly, despite the exposed port being registered with it, and thus isn't populating the upstream block. I've probably done something stupid, but it would be great if someone could confirm. There are no errors in the logs for the docker-gen container, but I will include the output here as well.
Is there something I'm missing? This is the full command I've used to launch my registry container:
docker run -d --name registry --restart always --expose 5000 -v /opt/registry:/etc/docker/registry:ro -e VIRTUAL_HOST=registry.danielgroves.net -e VIRTUAL_PORT=5000 registry_s3
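For anyone wanting to double-check what docker-gen should be seeing here, plain docker inspect works (just a debugging sketch; registry is the container name from the command above):
# Exposed ports and environment variables as docker-gen reads them
docker inspect --format '{{ json .Config.ExposedPorts }}' registry
docker inspect --format '{{ json .Config.Env }}' registry
# Networks the container is attached to
docker inspect --format '{{ json .NetworkSettings.Networks }}' registry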
default.conf:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
upstream registry.danielgroves.net {
}
server {
server_name registry.danielgroves.net;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
return 301 https://$host$request_uri;
}
server {
server_name registry.danielgroves.net;
listen 443 ssl http2 ;
access_log /var/log/nginx/access.log vhost;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;
ssl_prefer_server_ciphers on;
ssl_session_timeout 5m;
ssl_session_cache shared:SSL:50m;
ssl_certificate /etc/nginx/certs/registry.danielgroves.net.crt;
ssl_certificate_key /etc/nginx/certs/registry.danielgroves.net.key;
ssl_dhparam /etc/nginx/certs/registry.danielgroves.net.dhparam.pem;
add_header Strict-Transport-Security "max-age=31536000";
include /etc/nginx/vhost.d/default;
location / {
proxy_pass http://registry.danielgroves.net;
}
}
docker-gen logs:
2016/05/25 10:40:47 Generated '/etc/nginx/conf.d/default.conf' from 2 containers
2016/05/25 10:40:47 Sending container 'nginx' signal '1'
2016/05/25 10:40:47 Watching docker events
2016/05/25 10:40:47 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2016/05/25 10:41:16 Received event start for container 991b1d987b68
2016/05/25 10:41:16 Received signal: hangup
2016/05/25 10:41:16 Received signal: hangup
2016/05/25 10:41:16 Generated '/etc/nginx/conf.d/default.conf' from 2 containers
2016/05/25 10:41:16 Sending container 'nginx' signal '1'
2016/05/25 10:41:21 Debounce minTimer fired
2016/05/25 10:41:21 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2016/05/25 10:42:37 Received event die for container 60b6a7a736c5
2016/05/25 10:42:37 Received event stop for container 60b6a7a736c5
2016/05/25 10:42:42 Debounce minTimer fired
2016/05/25 10:42:42 Generated '/etc/nginx/conf.d/default.conf' from 1 containers
2016/05/25 10:42:42 Sending container 'nginx' signal '1'
2016/05/25 10:44:57 Received event start for container 7a2b1fc74686
2016/05/25 10:45:02 Debounce minTimer fired
2016/05/25 10:45:02 Generated '/etc/nginx/conf.d/default.conf' from 2 containers
2016/05/25 10:45:02 Sending container 'nginx' signal '1'
2016/05/26 08:57:16 Received event die for container 7a2b1fc74686
2016/05/26 08:57:16 Received event stop for container 7a2b1fc74686
2016/05/26 08:57:17 Received event start for container 2ea8b67292d6
2016/05/26 08:57:22 Debounce minTimer fired
2016/05/26 08:57:22 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2016/05/26 09:00:27 Received event die for container 2ea8b67292d6
2016/05/26 09:00:28 Received event stop for container 2ea8b67292d6
2016/05/26 09:00:28 Received event start for container 81131159c395
2016/05/26 09:00:33 Debounce minTimer fired
2016/05/26 09:00:33 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2016/05/26 09:03:00 Received event die for container 81131159c395
2016/05/26 09:03:00 Received event stop for container 81131159c395
2016/05/26 09:03:01 Received event start for container 44867e65b8ff
2016/05/26 09:03:06 Debounce minTimer fired
2016/05/26 09:03:06 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
I have the same issue but I guess it is more related to the nginx.tmpl file of nginx-proxy and not a docker-gen issue.
I'm also seeing this when spinning up a Zabbix 3 server with this docker-compose version 2 file:
https://github.com/zembutsu/zabbix-docker/blob/master/docker-compose.yml
(just changing port 80 to expose instead of ports)
I added the usual VIRTUAL_HOST=zabbix.mydomain.com to the zabbix-server environment variables, and after starting the containers I get this same error.
I tried to do the same, and ended up with this:
- Using nginx-proxy, everything goes fine.
- Using nginx and docker-gen separately, the generated conf contains an empty upstream section.
I checked whether there is any difference in the tmpl and couldn't find any.
I'm not sure how to check further (like how to print the structure), but the case is very easy to reproduce. Any pointers on how I can help with this?
OK, I have an idea and it happens to work (but I'm not sure why): I removed -only-exposed from the docker-gen options and it starts working. Note that:
- The target container has an exposed port.
- nginx-proxy works with exactly the same container.
For reference, I tested it with the following docker command:
docker run -d --expose=8000 -e VIRTUAL_HOST=
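For completeness, the docker-gen side of my test looks roughly like this (paths and container names follow the usual separate-container setup and are illustrative; the relevant part is simply that -only-exposed is absent):
# docker-gen without the -only-exposed flag; nginx is the proxy container
docker run -d --name docker-gen \
  --volumes-from nginx \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v "$(pwd)/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro" \
  jwilder/docker-gen \
  -notify-sighup nginx -watch -wait 5s:30s \
  /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf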
I have similar issues and can confirm the observations by @ninoles
Precisely the same here. Works with nginx-proxy image, not with separate nginx <> docker-gen containers.
I think it is possibly related to the latest changes from PR #158. What I found out is that $CurrentContainer is not correctly resolved.
docker-compose.yml
version: '2'
services:
  proxy-gen:
    image: jwilder/docker-gen
    container_name: proxy-gen
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./volumes/templates:/etc/docker-gen/templates:ro
      - ./volumes/config:/etc/docker-gen/config:ro
      - ./volumes/log:/etc/docker-gen/log:rw
    command: -watch -only-exposed /etc/docker-gen/templates/debug.tmpl /etc/docker-gen/log/debug.log
volumes/templates/debug.tmpl
>> DEBUG LOG START
{{ $CurrentContainer := where $ "ID" .Docker.CurrentContainerID | first }}
.Docker.CurrentContainerID: {{ .Docker.CurrentContainerID }}
CurrentContainer: {{ $CurrentContainer }}
Known networks:
{{ range $knownNetwork := $CurrentContainer.Networks }}
- {{ $knownNetwork }}
{{ end }}
{{ range $host, $containers := groupByMulti $ "Env.VIRTUAL_HOST" "," }}
Iterating {{ $host }} containers
{{ range $container := $containers }}
{{ $addrLen := len $container.Addresses }}
Name: {{ $container.Name }}
Exposed ports: {{ range $index, $address := $container.Addresses }} {{ if $index }},{{ end }} {{ $address.Port }} {{ end }}
Networks:
{{ range $containerNetwork := $container.Networks }}
- {{ $containerNetwork }}
{{ end }}
{{ end }}
End {{ $host }} containers
{{ end }}
>> DEBUG LOG END
In my example outputs I always find: proxy-gen | CurrentContainer: <no value>
The .Docker.CurrentContainerID is correctly resolved: proxy-gen | .Docker.CurrentContainerID: 0cd78119fcdce135d5e57c177650382c5a42e7fc501b49749e6fb173504ff81c
As far as I understand, the nginx.tmpl compares the $CurrentContainer's networks with the networks of each VIRTUAL_HOST container, in order to connect the docker-gen container to the host container's network. That only works if $CurrentContainer, fetched by {{ $CurrentContainer := where $ "ID" .Docker.CurrentContainerID | first }}, is not null. Anybody any idea why this expression is null?
Not really progress here, but it seems as if the docker-gen container itself is not part of the type Context []*RuntimeContainer.
debug.tmpl
{{ range $runtimeContainer := $ }}
ID: {{ $runtimeContainer.ID }}
{{ end }}
{{ $CurrentContainer := where $ "ID" .Docker.CurrentContainerID | first }}
Output:
proxy-gen | ID: 66d8cb1183a3d31cd62feb24fd26945e253c39839c6f4e6da8ca3356c0ea1538
proxy-gen | ID: 199b4a1d6f333218b0e8ee7135c0777b3d97e4c883461107ccc1e98671ae6c2e
proxy-gen | .Docker.CurrentContainerID: 650033b370904f7f827a64b7331a8471e47194481e2ba73f674508a42722980e
docker ps shows these container IDs running:
66d8cb1183a3
650033b37090
199b4a1d6f33
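A quick way to reproduce that comparison from the shell (assuming access to the same Docker host):
# Full (untruncated) IDs of every running container, to diff against
# the IDs printed by the debug template above
docker ps --no-trunc --format '{{ .ID }}  {{ .Names }}'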
I opened an issue on the actual nginx-proxy repo, since srt is probably right that this is an nginx-proxy issue rather than a docker-gen issue: #479
I had a similar issue, and the problem was that docker-compose's default behavior is to create a separate network for each docker-compose.yml and add all the services from that file to this new network. This gives you some nice isolation between different sets of services, but nginx-proxy is designed to be shared across multiple sets of services. My quick fix was to create a network called nginx-proxy:
docker network create nginx-proxy
Then I added the following to the bottom of my docker-compose file:
networks:
  default:
    external:
      name: nginx-proxy
Here is my complete docker-compose.yml:
version: '2'
services:
  prod:
    image: ${COMPOSE_PROJECT_NAME}-prod
    container_name: ${COMPOSE_PROJECT_NAME}-prod
    environment:
      - VIRTUAL_HOST=${COMPOSE_PROJECT_NAME}
    volumes:
      - ./prod/${COMPOSE_PROJECT_NAME}/web:/var/www/html/
      - ./prod/${COMPOSE_PROJECT_NAME}/data:/var/lib/mysql
    restart: always
  dev:
    image: ${COMPOSE_PROJECT_NAME}-dev
    container_name: ${COMPOSE_PROJECT_NAME}-dev
    environment:
      - VIRTUAL_HOST=${COMPOSE_PROJECT_NAME}.dev
    volumes:
      - ./dev/${COMPOSE_PROJECT_NAME}/web:/var/www/html
      - ./dev/${COMPOSE_PROJECT_NAME}/data:/var/lib/mysql
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  default:
    external:
      name: nginx-proxy
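Side note: containers that aren't managed by this compose file can be attached to the same network after the fact with the standard Docker CLI (some-container below is a placeholder):
docker network connect nginx-proxy some-container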
For the new networking feature of docker-compose "version 2", another solution is to use network_mode: "bridge" as well as networks: default: external: name: bridge, as sketched below. That helped me with some apps that use their own scripts to create their containers (Discourse in my case).
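Spelled out as a compose fragment, what that comment describes would look something like this (an untested sketch; myapp and myapp-image are placeholders):
version: '2'
services:
  myapp:
    image: myapp-image    # placeholder image
    network_mode: "bridge"
networks:
  default:
    external:
      name: bridge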
I use my own nginx image together with docker-gen. I ran into this problem today with no upstreams being written. I removed -only-exposed as suggested by @ninoles, and it worked.
Removing -only-exposed didn't help. I'm still getting no servers are inside upstream in /etc/nginx/conf.d/default.conf:36
Edit: works with nginx-proxy alone. ~~Same issue with nginx-proxy alone or nginx and docker-gen separately.~~
@briansrepo's solution fixed the issue for me: attaching the jwilder/nginx-proxy container to a proxy network, and then using that as the default network in the docker-compose.yml.
Just confirming @briansrepo's solution. Thanks a lot!
I tried first to rely on the default bridge:
networks:
  default:
    external:
      name: bridge
but got a refusal for the nginx service:
$ docker-compose up
ERROR: for nginx Network-scoped alias is supported only for containers in user defined networks
When using my own network name, and creating the network first,
networks:
  default:
    external:
      name: nginx-proxy
everything's fine:
$ docker network create nginx-proxy
$ docker-compose up
...
I just want to point out that @briansrepo's solution might work, but it is unrelated to the OP's issue. His fix only applies if you are using the jwilder/nginx-proxy image, which combines nginx and docker-gen. This issue, though, appears while using two containers, one for docker-gen and one for nginx.
I just ran into this after I decided to split my setup into two separate containers rather than one based on the combined image jwilder/nginx-proxy, and I can confirm that @ninoles' suggestion works around the issue. By removing -only-exposed from the docker-gen command my nginx config is generated properly.
Any updates here?
I am recently having major problems with the separate-container solution (the official nginx image, jwilder/docker-gen and JrCs/docker-letsencrypt-nginx-proxy-companion). It appears that the automatically generated upstream blocks are faulty (entirely empty).
I have looked around and tried the following proposed solutions:
- Removing "-only-exposed" from the docker-gen entrypoint.
- Changing letsencrypt-nginx-proxy-companion to version 1.4.
=> No success: empty upstream blocks in both cases.
I am using the following official nginx.tmpl: https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl
So currently, I cannot get any of the example setups (see the bottom of https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion) working using the recommended separate-container method. Am I the only one with these issues?
My best guess is that there is a problem with the nginx.tmpl file; does anyone know if it is up to date? Alternatively, could there be a port issue? The proxy is tied to my APIs by a bridge network.
I am using:
Docker version 17.04.0-ce, build 4845c56
docker-compose version 1.11.2, build dfed245
@victor-lund Are all your containers on the same network? docker network create nginxproxy and then add --net nginxproxy to all containers.
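For example (the image name and host below are placeholders):
docker network create nginxproxy
docker run -d --net nginxproxy -e VIRTUAL_HOST=app.example.com my-app-image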
I have also just started having problems. I have been using nginx/docker-gen/Let's Encrypt for several months and something has changed that has rendered everything useless.
I'm getting the same error here.
nginx | 2017/05/26 12:00:25 [emerg] 1#1: no servers are inside upstream in /etc/nginx/conf.d/default.conf:51
The versions are:
Docker version 17.05.0-ce, build 89658be
docker-compose version 1.13.0, build 1719ceb
jrcs/letsencrypt-nginx-proxy-companion:5176f3ad3949
jwilder/docker-gen:090b9fb87a9a
Hi! I just set up a new VPS (Ubuntu 16.04) for a web project. I installed Docker (17.05) and docker-compose (1.15) and copied my files to the server. When I run docker-compose now, the nginx-proxy container fails and I get:
nginx: [emerg] no host in upstream ":80" in /etc/nginx/conf.d/default.conf:35
The weird thing is that the exact same setup works fine on my local machine and 2 other VPSes. Here is my docker-compose.yml:
version: "2"
services:
dentsblanches-nginx-proxy:
build: ./nginx-proxy/
container_name: dentsblanches-nginx-proxy
ports:
- "80:80"
volumes:
- "/etc/nginx/conf.d"
- "/etc/nginx/vhost.d"
- "/usr/share/nginx/html"
- "./nginx-proxy/certs:/etc/nginx/certs:ro"
networks:
- server
depends_on:
- dentsblanches-php
dentsblanches-nginx-gen:
image: jwilder/docker-gen
container_name: dentsblanches-nginx-gen
volumes:
- "/var/run/docker.sock:/tmp/docker.sock:ro"
- "./nginx-proxy/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro"
volumes_from:
- dentsblanches-nginx-proxy
entrypoint: /usr/local/bin/docker-gen -notify-sighup dentsblanches-nginx-proxy -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
dentsblanches-nginx:
build: ./dentsblanches/nginx/
container_name: dentsblanches-nginx
environment:
- VIRTUAL_HOST=VPS-ip
- VIRTUAL_NETWORK=nginx-proxy
- VIRTUAL_PORT=80
volumes:
- ./dentsblanches/:/var/www/html
networks:
- server
depends_on:
- dentsblanches-php
dentsblanches-php:
build: ./dentsblanches/php/
container_name: dentsblanches-php
environment:
- VIRTUAL_PORT=9000
volumes:
- ./dentsblanches/:/var/www/html
networks:
- database
- server
depends_on:
- dentsblanches-mysql
dentsblanches-mysql:
image: mysql:latest
container_name: dentsblanches-mysql
volumes:
- data:/var/lib/mysql
networks:
- database
environment:
MYSQL_ROOT_PASSWORD: rootpasswd
MYSQL_DATABASE: dentsblanches
MYSQL_USER: dbuser
MYSQL_PASSWORD: dbuserpasswd
dentsblanches-phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: dentsblanches-phpmyadmin
ports:
- 8080:80
networks:
- database
depends_on:
- dentsblanches-mysql
environment:
PMA_HOST: dentsblanches-mysql
volumes:
data:
networks:
database:
server:
external:
name: nginx-proxy
And here is my nginx.tmpl:
{{ define "upstream" }}
{{ if .Address }}
{{/* If we got the containers from swarm and this container's port is published to host, use host IP:PORT */}}
{{ if and .Container.Node.ID .Address.HostPort }}
# {{ .Container.Node.Name }}/{{ .Container.Name }}
server {{ .Container.Node.Address.IP }}:{{ .Address.HostPort }};
{{/* If there is no swarm node or the port is not published on host, use container's IP:PORT */}}
{{ else }}
# {{ .Container.Name }}
server {{ .Address.IP }}:{{ .Address.Port }};
{{ end }}
{{ else }}
# {{ .Container.Name }}
server {{ .Container.IP }} down;
{{ end }}
{{ end }}
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
{{ if (exists "/etc/nginx/proxy.conf") }}
include /etc/nginx/proxy.conf;
{{ else }}
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
{{ end }}
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
{{ if (and (exists "/etc/nginx/certs/default.crt") (exists "/etc/nginx/certs/default.key")) }}
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 443 ssl http2;
access_log /var/log/nginx/access.log vhost;
return 503;
ssl_certificate /etc/nginx/certs/default.crt;
ssl_certificate_key /etc/nginx/certs/default.key;
}
{{ end }}
{{ range $host, $containers := groupByMulti $ "Env.VIRTUAL_HOST" "," }}
upstream {{ $host }} {
{{ range $index, $value := $containers }}
{{ $addrLen := len $value.Addresses }}
{{/* If only 1 port exposed, use that */}}
{{ if eq $addrLen 1 }}
{{ with $address := index $value.Addresses 0 }}
# {{$value.Name}}
server {{ $address.IP }}:{{ $address.Port }};
{{ end }}
{{/* If a VIRTUAL_NETWORK is specified use its IP */}}
{{ else if $value.Env.VIRTUAL_NETWORK }}
{{ range $i, $network := $value.Networks }}
{{ if eq $network.Name $value.Env.VIRTUAL_NETWORK }}
# Container: {{$value.Name}}@{{$network.Name}}
server {{ $network.IP }}:{{ $value.Env.VIRTUAL_PORT }};
{{ end }}
{{ end }}
{{/* If more than one port exposed, use the one matching VIRTUAL_PORT env var */}}
{{ else if $value.Env.VIRTUAL_PORT }}
{{ range $i, $address := $value.Addresses }}
{{ if eq $address.Port $value.Env.VIRTUAL_PORT }}
# {{$value.Name}}
server {{ $address.IP }}:{{ $address.Port }};
{{ end }}
{{ end }}
{{/* Else default to standard web port 80 */}}
{{ else }}
{{ range $i, $address := $value.Addresses }}
{{ if eq $address.Port "80" }}
# {{$value.Name}}
server {{ $address.IP }}:{{ $address.Port }};
{{ end }}
{{ end }}
{{ end }}
{{ end }}
}
{{ $default_host := or ($.Env.DEFAULT_HOST) "" }}
{{ $default_server := index (dict $host "" $default_host "default_server") $host }}
{{/* Get the VIRTUAL_PROTO defined by containers w/ the same vhost, falling back to "http" */}}
{{ $proto := or (first (groupByKeys $containers "Env.VIRTUAL_PROTO")) "http" }}
{{/* Get the first cert name defined by containers w/ the same vhost */}}
{{ $certName := (first (groupByKeys $containers "Env.CERT_NAME")) }}
{{/* Get the best matching cert by name for the vhost. */}}
{{ $vhostCert := (closest (dir "/etc/nginx/certs") (printf "%s.crt" $host))}}
{{/* vhostCert is actually a filename so remove any suffixes since they are added later */}}
{{ $vhostCert := replace $vhostCert ".crt" "" -1 }}
{{ $vhostCert := replace $vhostCert ".key" "" -1 }}
{{/* Use the cert specified on the container or fallback to the best vhost match */}}
{{ $cert := (coalesce $certName $vhostCert) }}
{{ if (and (ne $cert "") (exists (printf "/etc/nginx/certs/%s.crt" $cert)) (exists (printf "/etc/nginx/certs/%s.key" $cert))) }}
server {
server_name {{ $host }};
listen 80 {{ $default_server }};
access_log /var/log/nginx/access.log vhost;
return 301 https://$host$request_uri;
}
server {
server_name {{ $host }};
listen 443 ssl http2 {{ $default_server }};
access_log /var/log/nginx/access.log vhost;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;
ssl_prefer_server_ciphers on;
ssl_session_timeout 5m;
ssl_session_cache shared:SSL:50m;
ssl_certificate /etc/nginx/certs/{{ (printf "%s.crt" $cert) }};
ssl_certificate_key /etc/nginx/certs/{{ (printf "%s.key" $cert) }};
{{ if (exists (printf "/etc/nginx/certs/%s.dhparam.pem" $cert)) }}
ssl_dhparam {{ printf "/etc/nginx/certs/%s.dhparam.pem" $cert }};
{{ end }}
add_header Strict-Transport-Security "max-age=31536000";
{{ if (exists (printf "/etc/nginx/vhost.d/%s" $host)) }}
include {{ printf "/etc/nginx/vhost.d/%s" $host }};
{{ else if (exists "/etc/nginx/vhost.d/default") }}
include /etc/nginx/vhost.d/default;
{{ end }}
location / {
proxy_pass {{ trim $proto }}://{{ trim $host }};
{{ if (exists (printf "/etc/nginx/htpasswd/%s" $host)) }}
auth_basic "Restricted {{ $host }}";
auth_basic_user_file {{ (printf "/etc/nginx/htpasswd/%s" $host) }};
{{ end }}
{{ if (exists (printf "/etc/nginx/vhost.d/%s_location" $host)) }}
include {{ printf "/etc/nginx/vhost.d/%s_location" $host}};
{{ else if (exists "/etc/nginx/vhost.d/default_location") }}
include /etc/nginx/vhost.d/default_location;
{{ end }}
}
}
{{ else }}
server {
server_name {{ $host }};
listen 80 {{ $default_server }};
access_log /var/log/nginx/access.log vhost;
{{ if (exists (printf "/etc/nginx/vhost.d/%s" $host)) }}
include {{ printf "/etc/nginx/vhost.d/%s" $host }};
{{ else if (exists "/etc/nginx/vhost.d/default") }}
include /etc/nginx/vhost.d/default;
{{ end }}
location /.well-known/acme-challenge {
root /usr/share/nginx/html/;
break;
}
location / {
proxy_pass {{ trim $proto }}://{{ trim $host }};
{{ if (exists (printf "/etc/nginx/htpasswd/%s" $host)) }}
auth_basic "Restricted {{ $host }}";
auth_basic_user_file {{ (printf "/etc/nginx/htpasswd/%s" $host) }};
{{ end }}
{{ if (exists (printf "/etc/nginx/vhost.d/%s_location" $host)) }}
include {{ printf "/etc/nginx/vhost.d/%s_location" $host}};
{{ else if (exists "/etc/nginx/vhost.d/default_location") }}
include /etc/nginx/vhost.d/default_location;
{{ end }}
}
}
{{ if (and (exists "/etc/nginx/certs/default.crt") (exists "/etc/nginx/certs/default.key")) }}
server {
server_name {{ $host }};
listen 443 ssl http2 {{ $default_server }};
access_log /var/log/nginx/access.log vhost;
return 503;
ssl_certificate /etc/nginx/certs/default.crt;
ssl_certificate_key /etc/nginx/certs/default.key;
}
{{ end }}
{{ end }}
{{ end }}
Until now, this setup has been working fine both without SSL and with HTTPS (simply adding a container for letsencrypt-nginx-proxy-companion to my docker-compose.yml).
As I'm neither a Docker nor an nginx expert, I'm sure there is a stupid mistake either in my docker-compose file or in the nginx template. Any help would be greatly appreciated!
I am also having the same problem with a setup that had been working previously for months.
I tried adding the nginx-gen and nginx-letsencrypt containers to my existing nginx-proxy network and that didn't help. Neither did updating the nginx.tmpl file.
I do not know the language in which the template file is written. The template file iterates over the $containers variable at line 120. That variable is defined earlier at line 111, along with the $host variable. The value for $host is correct. I don't know what groupByMulti is or does, but I suspect that's the problem.
> I do not know the language in which the template file is written.

It's Go: https://golang.org/pkg/text/template/
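For reference, groupByMulti is one of docker-gen's template functions: as far as I understand it, it splits the given key (here Env.VIRTUAL_HOST) on the separator and groups containers under each resulting value, so a container with VIRTUAL_HOST=a.com,b.com appears under both hosts. A minimal template to see the grouping, in the same debug style used earlier in this thread:
{{ range $host, $containers := groupByMulti $ "Env.VIRTUAL_HOST" "," }}
{{ $host }}: {{ len $containers }} container(s)
{{ end }}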
It's my first experience with this template package, and I find it interesting so far! Through some printf-style debugging, I found that $CurrentContainer.Networks is not set, which is why the upstream is not populated. I cannot explore further at this time, but it would be nice to know why that variable has no value.
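For anyone who wants to pick this up, a debug template along these lines should reproduce it (same context as the earlier debug.tmpl examples in this thread):
{{ $CurrentContainer := where $ "ID" .Docker.CurrentContainerID | first }}
CurrentContainer: {{ $CurrentContainer }}
{{ if $CurrentContainer }}Networks: {{ range $CurrentContainer.Networks }}{{ .Name }} {{ end }}{{ end }}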
Most probably one of your proxied containers isn't on the nginx-proxy network, but that'll be hard to tell for sure without your docker run commands or docker-compose.yaml file :)
Oh, and don't use the nginx.tmpl present in this repo, it is extremely outdated. The latest one is here: https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl
I'm definitely using the correct template file, from the jwilder/nginx-proxy repository. I'm pretty sure the right containers are on the right network, but to make sure, here's my docker-compose.yml file:
version: "2"
services:
nginx:
image: nginx
container_name: nginx
ports:
- "80:80"
- "443:443"
volumes:
- "/etc/nginx/conf.d"
- "./volumes/proxy/vhost.d:/etc/nginx/vhost.d"
- "/usr/share/nginx/html"
- "./volumes/proxy/certs:/etc/nginx/certs:ro"
networks:
- proxy-tier
restart: unless-stopped
nginx-gen:
image: jwilder/docker-gen
container_name: nginx-gen
volumes:
- "/var/run/docker.sock:/tmp/docker.sock:ro"
- "./volumes/proxy/templates/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro"
volumes_from:
- nginx
entrypoint: /usr/local/bin/docker-gen -notify-sighup nginx -watch -only-exposed -wait 5s:30s /etc
/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
restart: unless-stopped
letsencrypt-nginx-proxy-companion:
image: jrcs/letsencrypt-nginx-proxy-companion
container_name: nginx-letsencrypt
volumes_from:
- nginx
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "./volumes/proxy/certs:/etc/nginx/certs:rw"
environment:
- NGINX_DOCKER_GEN_CONTAINER=nginx-gen
- NGINX_PROXY_CONTAINER=nginx
restart: unless-stopped
server:
image: rancher/server:stable
container_name: rancher-server
volumes:
- "./volumes/server/rancher_prod:/var/lib/mysql"
environment:
- VIRTUAL_HOST=rancher-server.twilley.org
- VIRTUAL_NETWORK=nginx-proxy
- VIRTUAL_PORT=8080
- LETSENCRYPT_HOST=rancher-server.twilley.org
- [email protected]
networks:
- proxy-tier
restart: unless-stopped
registry:
image: registry:2
container_name: registry
volumes:
- "./volumes/registry/auth:/auth"
- "./volumes/registry/config:/etc/docker/registry"
- "./volumes/registry/contents:/var/lib/registry"
environment:
- REGISTRY_AUTH=htpasswd
- REGISTRY_AUTH_HTPASSWD_REALM=Registry
- REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd
- VIRTUAL_HOST=registry.twilley.org
- VIRTUAL_NETWORK=nginx-proxy
- VIRTUAL_PORT=5000
- LETSENCRYPT_HOST=registry.twilley.org
- [email protected]
networks:
- proxy-tier
restart: unless-stopped
networks:
proxy-tier:
external:
name: nginx-proxy
Do you know how the $CurrentContainer.Networks variable is set?
docker-gen gets that info through the Docker API with this function.
Have you tried removing -only-exposed from the docker-gen container's entrypoint?
In cases like yours, where every container should be on the network, I prefer to use
networks:
  default:
    external:
      name: nginx-proxy
so that I don't have to add each individual container to the network. I never used the VIRTUAL_NETWORK env variable; I wasn't even aware it existed.
You are definitely not the first to have issues with docker-gen and rancher.
I'm finally up, but I had to do all three:
- remove VIRTUAL_NETWORK
- remove -only-exposed
- use the default network
Good times. Thanks for the help!
Just for anyone else stuck here: none of the advice works if you use docker-compose -f my-file.yml run service etc., even if you specify all the correct options in the compose file. I think some options in the docker-compose file are ignored by docker-compose run.
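If it helps anyone: docker-compose run skips the service's network aliases by default, and newer Compose versions have a flag to opt back in. I haven't verified this against the setups above, so treat it as a pointer rather than a fix:
# --use-aliases makes docker-compose run apply the service's network aliases
docker-compose -f my-file.yml run --use-aliases service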