Unable to authenticate to DSM using HTTP
When I run this container the first time in stage and production mode, everything works fine. But when I run it again, it can't replace the certificate in DSM until I delete the /data/acme/account.conf file.
Error:
```
acme_1 | -----END CERTIFICATE-----
acme_1 | [Sun Jun 6 11:36:22 UTC 2021] Your cert is in /acme.sh/x/x.cer
acme_1 | [Sun Jun 6 11:36:22 UTC 2021] Your cert key is in /acme.sh/x/x.key
acme_1 | [Sun Jun 6 11:36:22 UTC 2021] The intermediate CA cert is in /acme.sh/x/ca.cer
acme_1 | [Sun Jun 6 11:36:22 UTC 2021] And the full chain certs is there: /acme.sh/x/fullchain.cer
acme_1 | [Sun Jun 6 11:36:22 UTC 2021] Logging into xxxxxx:5000
acme_1 | [Sun Jun 6 11:36:28 UTC 2021] Unable to authenticate to xxxxx:5000 using http.
acme_1 | [Sun Jun 6 11:36:28 UTC 2021] Check your username and password.
acme_1 | [Sun Jun 6 11:36:28 UTC 2021] Error deploy for xxxxx
acme_1 | [Sun Jun 6 11:36:28 UTC 2021] Deploy error.
```
Steps to reproduce without swarm mode:

A:
- Follow the testing instructions → 100% success
- Set `FORCE_RENEW=true` in `.env`
- Run `docker-compose up` → error

B:
- Follow the testing instructions → 100% success
- Set `FORCE_RENEW=true` in `.env`
- Set `TARGET=production` in `.env`
- Run `docker-compose up` → error

C:
- Follow the testing instructions
- Set `TARGET=production` in `.env`
- Run `docker-compose up` → 100% success
- Change `DOMAIN=somethingelse.com` in `.env`
- Run `docker-compose up` → error

Fix:
Delete the /data/acme/account.conf file after a 100% success.
That's odd; it looks like the acme.sh script does not pick up the Docker secrets (staged as environment variables) correctly. What happens if you run the following statement from within the container?

```shell
grep -vH --null '^#' /run/secrets/* | tr '\0' '=' | sed 's/^\/run\/secrets\///g'
```
Does it show the following credentials?
```
CF_Email=xxx
CF_Token=xxx
SYNO_Password=xxx
SYNO_Username=xxx
```
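For reference, the pipeline reads every non-comment line from the secret files and turns each file name into a `KEY=value` pair. A minimal sketch of the same transformation, using a throwaway `/tmp` directory with made-up sample values instead of the real `/run/secrets` (which only exists inside the container):

```shell
# Demo of the secrets-to-ENV pipeline (hypothetical path and sample values).
mkdir -p /tmp/secrets-demo
printf 'acme-cert' > /tmp/secrets-demo/SYNO_Username
printf 's3cret' > /tmp/secrets-demo/SYNO_Password

# -v '^#'  : drop comment lines
# -H --null: prefix each line with its file name, separated by a NUL byte
# tr       : turn that NUL separator into '='
# sed      : strip the directory prefix, leaving KEY=value
grep -vH --null '^#' /tmp/secrets-demo/* | tr '\0' '=' | sed 's/^\/tmp\/secrets-demo\///g'
# → SYNO_Password=s3cret
# → SYNO_Username=acme-cert
```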
The secrets are working, because the container can create a new certificate, and everything works once /data/acme/account.conf is deleted.
account.conf:

```
AUTO_UPGRADE='1'
UPGRADE_HASH='8a08de56915db23cdc0a18de301556f7ce531881'
USER_PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
```
Maybe the upgrade hash isn't valid after a redeploy? I don't know what this setting does.
The error seems to be connecting to Synology though? From your logs:
```
acme_1 | [Sun Jun 6 11:36:22 UTC 2021] Logging into xxxxxx:5000
acme_1 | [Sun Jun 6 11:36:28 UTC 2021] Unable to authenticate to xxxxx:5000 using http.
```
> grep -vH --null '^#' /run/secrets/* | tr '\0' '=' | sed 's/^\/run\/secrets\///g'

This works:

```shell
$ grep -vH --null '^#' /run/secrets/* | tr '\0' '=' | sed 's/^\/run\/secrets\///g'
[email protected]
CF_Token=Xkcu7xxxxNi9FXTeufhh8xxxxHCfKl_4gxxxxb
SYNO_Password=xxxxxxxxx
SYNO_Username=acme-cert
```
> The error seems to be connecting to Synology though? From your logs:

Yes, the script can't log in to Synology when an existing /data/acme/account.conf file is present. Everything else works.
Something is strange. I don't get this issue with the new build from the develop branch.
```shell
docker-compose down
# switch to develop
docker-compose pull
docker-compose up
# SUCCESS
docker-compose down
# switch to 2.8.6
docker-compose up
# described error
docker-compose down
# switch to develop
docker-compose up
# SUCCESS
```
Images:

```
xxxxx@xxxxx:/volume1/acme/dsm-XPEnology-acme$ sudo docker image ls
REPOSITORY               TAG       IMAGE ID       CREATED         SIZE
markdumay/synology-tls   develop   26a96e707dd0   6 hours ago     66.5MB
markdumay/synology-tls   2.8.6     7e94887bc623   12 months ago   65.9MB
```
It's still strange; I'd expect the ENV vars to take precedence over the account.conf file. Perhaps this has been revised in the acme.sh script itself. If I remember correctly, you can indeed safely remove the account.conf file, as all required credentials are available in the ENV instead. Adding an rm statement to the scripts could then enforce this.
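A sketch of what such an rm statement could look like (hypothetical snippet; the account.conf path is the one reported in this thread, and the `ACCOUNT_CONF` variable is made up for illustration):

```shell
# Hypothetical cleanup step for the container's startup/deploy script:
# remove a stale account.conf so acme.sh re-reads its credentials from
# the staged ENV vars / Docker secrets on the next deploy.
account_conf="${ACCOUNT_CONF:-/data/acme/account.conf}"
rm -f "$account_conf"   # -f: no error if the file is already absent
```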
I pushed a new development image this morning (or actually, Docker Hub detected a change in the GH repo). The only notable change in the develop branch is the addition of `--server letsencrypt`. The only other difference I can imagine is the installation of acme.sh in the Docker image. Do you see a difference between the two images when running `acme.sh --version`?
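One way to compare the two images might look like this (hypothetical invocation: it assumes `acme.sh` is on the image's `PATH` and that the default entrypoint can be overridden; adjust to the actual image layout):

```shell
# Print the acme.sh version bundled in each image tag (tags from this thread).
docker run --rm --entrypoint acme.sh markdumay/synology-tls:develop --version
docker run --rm --entrypoint acme.sh markdumay/synology-tls:2.8.6 --version
```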