Recent change caused environment variables to no longer work
Today, when I updated my docker-gen container (latest tag), it stopped working, seemingly no longer finding the environment variables I have defined.
When I reverted to 0.14 everything worked fine again, so it seems that image 9209330fbea3 has a bug where it no longer reads from the environment.
I saw these errors, which seem to indicate that it's looking up variables in the wrong maps, though I may be wrong:
nginx-proxy-gen | 2025/06/18 16:15:16 unable to find key from the path expression VIRTUAL_HOST in map map[PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin]
nginx-proxy-gen | 2025/06/18 16:15:16 unable to find key from the path expression VIRTUAL_HOST in map map[ACMESH_VERSION:3.1.1 COMPANION_VERSION:v2.6.0-4-g1fd6385 DHPARAM_GENERATION:false DOCKER_HOST:unix:///var/run/docker.sock PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/app]
[...]
map[LETSENCRYPT_HOST:bashquotes.dragonhive.net NODE_VERSION:18.20.8 PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin VIRTUAL_HOST:bashquotes.dragonhive.net VIRTUAL_PORT:3000 VIRTUAL_PROTO:http YARN_VERSION:1.22.22]
nginx-proxy-gen | 2025/06/18 16:15:16 unable to find key from the path expression CERT_NAME in map map[LETSENCRYPT_HOST:bashquotes.dragonhive.net NODE_VERSION:18.20.8 PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin VIRTUAL_HOST:bashquotes.dragonhive.net VIRTUAL_PORT:3000 VIRTUAL_PROTO:http YARN_VERSION:1.22.22]
Note that VIRTUAL_HOST is present in the maps of the later errors, when it's looking for different variables (like CERT_NAME), but not in the maps where it's actually looking for VIRTUAL_HOST, so perhaps it's grabbing the wrong ones?
I know about the work to move to labels instead of environment variables, but this seemed unrelated (and I also recall it being said that the 'legacy' environment variables would still be supported for the time being).
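For reference, the affected container declares these variables the usual way in its compose file, roughly like this (a minimal sketch, not my exact compose file; the service name and base image are placeholders, the values are the ones visible in the error maps above):

```yaml
services:
  # placeholder service name and image; the environment values below
  # match the ones shown in the error maps
  bashquotes:
    image: node:18
    environment:
      - VIRTUAL_HOST=bashquotes.dragonhive.net
      - VIRTUAL_PORT=3000
      - VIRTUAL_PROTO=http
      - LETSENCRYPT_HOST=bashquotes.dragonhive.net
```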
Likely #682 @buchdag.
@aaccioly agreed 👍
@xarinatan could you pull the image nginxproxy/docker-gen:revert-682 and validate that it fixes the issue?
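If it helps, testing should just be a matter of pointing the docker-gen service at that tag, something like this (a minimal sketch; only the image tag matters, the rest of the service definition is assumed):

```yaml
services:
  nginx-proxy-gen:
    # test tag mentioned above; the rest of this service definition is a sketch
    image: nginxproxy/docker-gen:revert-682
    volumes:
      # matches DOCKER_HOST=unix:///var/run/docker.sock seen in the logs
      - /var/run/docker.sock:/var/run/docker.sock:ro
```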
Sorry for the delay, I've been busy and forgot to get back to this. I just gave it a quick test in production (because I'm 🤠 like that) and it seems to work 👍 Nice quick find! @aaccioly @buchdag
@xarinatan I'm actually unable to reproduce the bug on my side, and the nginx-proxy test suite seems to pass just fine with a docker-gen version that includes the patch from #682, the one that was supposed to cause the bug 🤷
- could you detail the issues you experienced besides the log warnings?
- could you provide a reproducible example?
So far I've seen that #682 mistakenly generates a lot of undesirable logs, but I did not manage to make it break anything.
I've tested the latest tag again and it seems to work now. Previously it definitely didn't work: the frontend was fully offline while I had those error messages in the log I posted in the original report. I'm not seeing those anymore either, so maybe something was fixed, or maybe it was a temporary glitch on my side somehow (I have rebooted the server at least once since then for unrelated updates).
So I guess it can be closed for now; I'll reopen it if the problem happens again (and if anyone else runs into it, feel free to reopen it too).
@xarinatan the issue is that I had to revert a fix for another bug, and I was unable to reproduce the problems you encountered.
In all the tests I've done, I had tons of garbage log messages like the ones you mentioned, but no actual breakage.
The fix I had to revert is still needed, so I'm going to need your help to reproduce the problem you had, or to confirm that another new version with the aforementioned fix still works for you.
@buchdag, it was failing for me as well. In addition to the logs, with a 3-container setup the nginx container was getting stuck in a restart loop. Could you cut a tag "reverting your revert" and maybe trim the excess logs? I'm happy to test it against my setup and, if it fails, try to provide you with a reproducible example.
@aaccioly could you test with the nginxproxy/docker-gen:fix-679 image?
It's been built from the fix/679-v2 branch, which:
- re-applies the fix
- removes the excess logs
- adds some tests that seemed pertinent to this issue but did not yield unexpected results
Thank you for your help 👍
Ok, so I just ran a quick test with https://github.com/aaccioly-open-source/haven. Other than a slightly longer startup sequence, likely due to extra interactions between acme-companion and docker-gen (99% likely a coincidence related to certificate renewal shenanigans), everything is working and stable with nginxproxy/docker-gen:fix-679. So, on my end, feel free to merge.
docker-gen logs:
2025/07/13 17:35:46 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2025/07/13 17:35:46 Watching docker events
2025/07/13 17:35:46 Received signal: hangup
2025/07/13 17:35:46 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2025/07/13 17:35:46 Received signal: hangup
2025/07/13 17:35:46 Generated '/etc/nginx/conf.d/default.conf' from 4 containers
2025/07/13 17:35:46 Sending container 'nginx-proxy' signal '1'
2025/07/13 17:35:46 Received signal: hangup
2025/07/13 17:35:46 Received signal: hangup
2025/07/13 17:35:46 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2025/07/13 17:35:46 Received signal: hangup
2025/07/13 17:35:46 Received signal: hangup
2025/07/13 17:35:46 Generated '/etc/nginx/conf.d/default.conf' from 4 containers
2025/07/13 17:35:46 Sending container 'nginx-proxy' signal '1'
2025/07/13 17:35:46 Received signal: hangup
2025/07/13 17:35:46 Received signal: hangup
2025/07/13 17:35:46 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
acme-companion logs:
Info: running acme-companion version v2.6.0-4-g1fd6385
Info: 4096 bits RFC7919 Diffie-Hellman group found, generation skipped.
Reloading nginx docker-gen (using separate container b22b41f41036d071ffe42f821489bd6c6ad6c96d87443d89ac70ddf6c4606103)...
Reloading nginx (using separate container 104de59a08a3032b9b435a6069572fe665f497166be6c85a4532ff7269f261f8)...
Warning: /app/letsencrypt_service_data not found, skipping data from containers.
2025/07/13 18:35:46 Generated '/app/letsencrypt_service_data' from 4 containers
2025/07/13 18:35:46 Running '/app/signal_le_service'
Reloading nginx docker-gen (using separate container b22b41f41036d071ffe42f821489bd6c6ad6c96d87443d89ac70ddf6c4606103)...
2025/07/13 18:35:46 Watching docker events
Creating/renewal haven.accioly.social certificates... (mydomain.com)
2025/07/13 18:35:46 Contents of /app/letsencrypt_service_data did not change. Skipping notification '/app/signal_le_service'
[Sun Jul 13 18:35:46 BST 2025] Domains not changed.
[Sun Jul 13 18:35:46 BST 2025] Skipping. Next renewal time is: 2025-08-05T01:04:33Z
[Sun Jul 13 18:35:46 BST 2025] Add '--force' to force renewal.
Reloading nginx docker-gen (using separate container b22b41f41036d071ffe42f821489bd6c6ad6c96d87443d89ac70ddf6c4606103)...
Reloading nginx (using separate container 104de59a08a3032b9b435a6069572fe665f497166be6c85a4532ff7269f261f8)...
Reloading nginx docker-gen (using separate container b22b41f41036d071ffe42f821489bd6c6ad6c96d87443d89ac70ddf6c4606103)...
Reloading nginx (using separate container 104de59a08a3032b9b435a6069572fe665f497166be6c85a4532ff7269f261f8)...
Sleep for 3600s