Feature Request: Present self-signed certificate until publicly-trusted cert is available
Hello
First off, thank you for this awesome project! As Whitestrake explained here, http-01 doesn't work for the initial certificate issuance behind CloudFlare. This puts us in the unfortunate position of having to accept downtime when migrating a website to our servers.
I think this could be prevented by having an option to use TLS internal as a fallback. That would mean that:
- http-01 would work through CloudFlare with "Full" encryption (without "strict") and without manual intervention
- no downtime would be caused while the certificate is not yet issued (or if renewal ever failed for whatever reason), because CloudFlare could still terminate TLS as usual.
Of course this option should probably not be enabled by default as some users might prefer a hard fail when a cert is missing.
Would you consider implementing this feature? If not, would a PR be welcome?
You can already do this and I highly recommend it in your case. Just add an internal issuer to your automation policies. I thought I had an example in a wiki or docs or something, but in the original PR it's kind of here: https://github.com/caddyserver/caddy/pull/3862
tls {
	issuer zerossl
	issuer acme
	issuer internal
}
Something like that. Or if you're using the JSON, just add an internal issuer to the end of your list of issuers: https://caddyserver.com/docs/json/apps/tls/automation/policies/
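For reference, in JSON such a fallback policy might look roughly like this (a minimal sketch following the linked docs; "example.com" and the surrounding structure are just placeholders, the point is the internal issuer listed last):

{
	"apps": {
		"tls": {
			"automation": {
				"policies": [
					{
						"subjects": ["example.com"],
						"issuers": [
							{"module": "acme"},
							{"module": "zerossl"},
							{"module": "internal"}
						]
					}
				]
			}
		}
	}
}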
Related: https://caddy.community/t/using-caddy-to-keep-certificates-renewed/7525?u=matt
@mholt Thanks for the tip, but I don't think that really solves the problem (at least it didn't in my tests): to prevent downtime, the fallback shouldn't kick in only after the other issuers fail; it needs to be used before they succeed. Did I miss something?
Oh, I misunderstood then. I updated the issue title to better express what I understand you're asking for now.
Hi, are we expecting this feature anytime soon?
@manigarg31 we're not currently working on it. Please explain why you'd like this!
I can work on it this year probably; but yeah, the more details we have about requirements and use cases, the better.
We have a use case where we cannot obtain certificates for all the websites, since that would overload the certificate authority, so we whitelist the domains for which we need certificates. But we cannot block the rest of the sites; we still have to serve them with a test certificate, which shows a warning, and users can proceed past it if they want.
Yes, this would be good for handling Cloudflare, for serving HTTPS on an IP address with a generic Caddy configuration that doesn't have to list each IP address with tls internal, and also for when certificate authority limits are hit, whether per-subdomain limits or time-based rate limits.
It would also be cool to have a dynamic way to specify this via the ask endpoint with On-Demand TLS: the endpoint response could carry information such as which issuer list or order to use. That way one could split issuers across sets of domains, keep some domains on a self-signed certificate until they are needed, or whitelist domains dynamically via the ask endpoint if certificate authority limits are reached.
It would also be cool to specify in the Caddy configuration that, if the ask endpoint is not reachable or available, Caddy falls back to self-signed certs until the endpoint recovers.
@mholt @francislavoie
One thing I want to clarify:
http-01 behind Cloudflare does work just fine.
Cloudflare will proxy http://example.com/.well-known/acme-challenge/example as plain http:// to port 80 on the configured upstream, not terminating TLS, unlike any other URI.
This is specific to /.well-known/acme-challenge/* and, as far as I can tell, happens exclusively for that path.
That holds even with always_use_https = true ("Always Use HTTPS") enabled or ssl = "strict" ("Full (Strict)") set in the zone settings.
It would also be cool to specify in the Caddy configuration that, if the ask endpoint is not reachable or available, Caddy falls back to self-signed certs until the endpoint recovers.
That's dangerous. What this would mean is that an attacker could cause your server to issue an unbounded number of certificates, filling up your storage until it can't anymore. The attacker would just need to point a wildcard DNS record at your server, then make requests for endless subdomains. That's one of the main purposes of ask, to mitigate that kind of attack. It should reject by default, otherwise that attack vector is open.
It would also be cool to have a dynamic way to specify this via the ask endpoint with On-Demand TLS: the endpoint response could carry information such as which issuer list or order to use
Hmm, that's a bit awkward. Issuers don't have any kind of identifier under the hood, it's just an array of issuer objects. So there's no way to actually tell Caddy after the config is loaded which way to order them. Unless we do add some identifier field on issuers. But that's not a precedent we have right now, really.
It would also mean having to specify a schema for the response returned from the ask endpoint. Right now, it's intentionally free-form in that we only read the status code. That means the upstream endpoint can respond however it wants (whatever's most convenient for your framework) and it should just work.
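To make that contract concrete: Caddy calls the ask URL with the candidate name in a domain query parameter and only looks at the status code. A rough sketch of such an endpoint in Go, rejecting by default (the /check path, port, and allow-list below are made-up placeholders):

package main

import (
	"log"
	"net/http"
)

// Domains we are willing to obtain certificates for. Anything not
// listed is rejected, which keeps the on-demand attack surface
// closed by default.
var allowed = map[string]bool{
	"example.com":     true,
	"www.example.com": true,
}

func main() {
	http.HandleFunc("/check", func(w http.ResponseWriter, r *http.Request) {
		domain := r.URL.Query().Get("domain")
		if allowed[domain] {
			w.WriteHeader(http.StatusOK) // any 2xx means Caddy may proceed with issuance
			return
		}
		w.WriteHeader(http.StatusForbidden) // anything else means reject
	})
	log.Fatal(http.ListenAndServe(":5555", nil))
}

The body is never inspected, so the handler can respond in whatever shape is convenient for your stack.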
We have a use case where we cannot obtain certificates for all the websites, since that would overload the certificate authority
How many domains are you planning to manage? Caddy's rate limit is set to 10 per 10 seconds. See https://caddyserver.com/docs/automatic-https#errors. Unless you're serving hundreds of thousands or millions, I don't see that as being a problem.
@francislavoie Good points, but what if the self-signed cert was discarded after being used for that handshake?
This might be unacceptably slow, however... unless maybe we keep a single ephemeral key in memory.
Sure, but then you're trading storage for CPU cycles. The attacker could exhaust CPU resources from generating keys and issuing certs, instead of exhausting storage space. :man_shrugging:
That's why I'm thinking maybe we could keep a single ephemeral key in memory. The cert is pretty cheap after that.
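To illustrate the trade-off, here is a rough sketch of that idea: one long-lived in-memory key, with a throwaway self-signed certificate minted per handshake and never written to storage. This is only an illustration of the approach under discussion, not Caddy's actual implementation:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"time"
)

// One ephemeral key for the whole process; generating the key is the
// expensive part, signing a throwaway certificate with it is cheap.
// (Error ignored for brevity in this sketch.)
var ephemeralKey, _ = ecdsa.GenerateKey(elliptic.P256(), rand.Reader)

// selfSignedFor mints a short-lived self-signed certificate for the
// requested name using the shared key. Nothing touches storage, so an
// attacker spamming bogus SNI values can't fill the disk.
func selfSignedFor(name string) (*tls.Certificate, error) {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: name},
		DNSNames:     []string{name},
		NotBefore:    time.Now().Add(-time.Minute),
		NotAfter:     time.Now().Add(time.Hour),
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &ephemeralKey.PublicKey, ephemeralKey)
	if err != nil {
		return nil, err
	}
	return &tls.Certificate{Certificate: [][]byte{der}, PrivateKey: ephemeralKey}, nil
}

func main() {
	// Hand the per-handshake certificate to the TLS stack; plug cfg
	// into whatever listener you use.
	cfg := &tls.Config{
		GetCertificate: func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
			return selfSignedFor(hello.ServerName)
		},
	}
	_ = cfg
}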
How many domains are you planning to manage? Caddy's rate limit is set to 10 per 10 seconds. See https://caddyserver.com/docs/automatic-https#errors. Unless you're serving hundreds of thousands or millions, I don't see that as being a problem.
Here we are looking at hundreds of thousands of domains on a regular basis.
issuer zerossl
issuer acme
issuer internal
@mholt this doesn't work under an https:// site block; I'm seeing this error: automation policy from site block is also default/catch-all policy because of key without hostname, and the two are in conflict: r{(*caddytls.ACMEIssuer)(0xc0001168c0), (*caddytls.ZeroSSLIssuer)(0xc00032a7b0), (*caddytls.InternalIssuer)(0xc00032a840)} != []certmagic.Issuer{(*caddytls>
@francislavoie We are already using this arrangement in nginx with Let's Encrypt, since the rate limit is effectively enforced through domain validation: if a domain is not found in the Redis database, an internally kept self-signed static certificate is used. It would only have been risky in Caddy if the ask URL were not used. For overwhelming request volumes there is always a rate limit and fail2ban.
@whizzygeeks You already opened a new issue to discuss this: https://github.com/caddyserver/caddy/issues/5627 -- please don't repeat and cause extra work.
Hi, I am interested in this feature. I'd really like Caddy not to request a Let's Encrypt/ZeroSSL certificate until the domain's DNS points to the host IP; in the meantime, Caddy could present a self-signed certificate, just as stated. Caddy could run something like ips, err := net.LookupIP("example.com") in Go every 5 seconds or so to check that the domain resolves to the host's IP (maybe something extra would be needed to handle CNAME records). Once it matches the host, it could trigger certificate issuance; until then, an internal self-signed certificate is used.
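A rough sketch of that polling loop (the domain, host IP, and 5-second interval below are placeholder values for illustration only):

package main

import (
	"log"
	"net"
	"time"
)

// waitForDNS polls until the domain resolves to the expected host IP,
// then returns so the caller can trigger certificate issuance.
// net.LookupIP resolves through CNAMEs to the final A/AAAA records.
func waitForDNS(domain, hostIP string, interval time.Duration) {
	for {
		ips, err := net.LookupIP(domain)
		if err == nil {
			for _, ip := range ips {
				if ip.String() == hostIP {
					log.Printf("%s now points at %s, safe to request a certificate", domain, hostIP)
					return
				}
			}
		}
		time.Sleep(interval)
	}
}

func main() {
	waitForDNS("example.com", "203.0.113.10", 5*time.Second)
}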
This would be useful when I am waiting for a customer to point their domain's DNS at my server (for a domain I don't control) and I can't sit around waiting for them to notify me before triggering issuance for the new domain. I don't want to leave the SSL issuance as it is right now, because the retry wait-time grows exponentially; that means that if the other person points the domain a day later, it could face minutes or hours of downtime until the next retry succeeds. If this were possible, I could just leave it, and when the DNS points to the IP, Caddy would automatically trigger the SSL certificate issuance, making things much easier.
Also, on the other hand, this behavior would save unnecessary Let's Encrypt and ZeroSSL requests.
My man @hialvaro, you actually want On-Demand TLS.
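For reference, a minimal On-Demand TLS setup along those lines might look like this (the ask URL is a placeholder; with this, Caddy only attempts issuance for a name once a client actually requests it and the ask endpoint approves):

{
	on_demand_tls {
		ask http://localhost:5555/check
	}
}

https:// {
	tls {
		on_demand
	}
	respond "Hello"
}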