website
Invalid TLS certificate served for v1-{5,6,7,8}.docs.kubernetes.io etc
This is a...
- [ ] Feature Request
- [x] Bug Report
Problem: https://v1-8.docs.kubernetes.io/docs/reference/ serves an invalid TLS certificate:
This server could not prove that it is v1-8.docs.kubernetes.io; its security certificate is from *.netlify.com. This may be caused by a misconfiguration or an attacker intercepting your connection.
Proposed Solution: Use a valid TLS certificate for https://v1-8.docs.kubernetes.io/.
Page to Update:
- https://v1-8.docs.kubernetes.io
- https://v1-7.docs.kubernetes.io
- https://v1-6.docs.kubernetes.io
- https://v1-5.docs.kubernetes.io
They are all linked from https://v1-9.docs.kubernetes.io/docs/reference/
(https://v1-9.docs.kubernetes.io is OK).
Update (@zacharysarah, 30 July 2019): Given subsequent releases since this issue was opened, versions 1.9 and 1.10 have also aged out and return invalid TLS certificate warnings.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Update: https://v1-9.docs.kubernetes.io/docs/reference/ is showing an error now too.
Docs for v1.10 are at https://v1-10.docs.kubernetes.io/ and don't link to earlier versions AFAICT.
The outcome I'd like: someone who visits https://v1-9.docs.kubernetes.io/docs/reference/ can negotiate TLS, talk HTTP over it, and ask for that page. The webserver sends back an attractive 410 Gone response with Kubernetes project styling and a hyperlink to the available documentation versions.
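For illustration, a minimal sketch of a check for that outcome (a hedged example, not existing tooling; assumes a runtime with a global `fetch`, e.g. Node 18+):

```typescript
// Sketch: confirm an aged-out docs host can negotiate TLS, speak HTTP,
// and answer 410 Gone (the outcome described above).
async function checkGone(url: string): Promise<void> {
  const res = await fetch(url);                 // TLS handshake + HTTP request
  console.log(`${url} -> HTTP ${res.status}`);  // desired: 410
  if (res.status === 410) {
    const body = await res.text();
    // The 410 body should point readers at the currently supported docs versions.
    console.log(body.includes("https://kubernetes.io/docs/") ? "links to live docs" : "no docs link");
  }
}

checkGone("https://v1-9.docs.kubernetes.io/docs/reference/").catch(console.error);
```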
@sftim That would be a nice outcome. I’ll reach out to netlify and see what it would take to serve a 410.
I was thinking about a second Hugo site. The reason is that the 410 page content I'm imagining needs to change each time there's a Kubernetes release, to hyperlink to the new supported versions.
I have this in mind:
- use images and stylesheets from the main Kubernetes website
- possibly have a robots.txt
- map every other page on the old sites to a 410 response
- Hugo publishes the content for the body of the 410 (sketched below)
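For illustration, a rough sketch of how that could behave if the old hostnames pointed at a small server outside Netlify (hypothetical: the path public/410.html for the Hugo-published body and the port are placeholders; TypeScript on Node):

```typescript
import { createServer } from "node:http";
import { readFileSync } from "node:fs";

// Hypothetical layout: Hugo publishes the 410 body (regenerated each release so
// it can link to the currently supported docs versions) to public/410.html.
const goneBody = readFileSync("public/410.html", "utf8");
const robots = "User-agent: *\nDisallow: /\n"; // keep crawlers away from retired versions

createServer((req, res) => {
  if (req.url === "/robots.txt") {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end(robots);
    return;
  }
  // Map every other page on the old sites to a 410 response.
  res.writeHead(410, { "Content-Type": "text/html; charset=utf-8" });
  res.end(goneBody);
}).listen(8080);
```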
@sftim
I was thinking about a second Hugo site.
It sounds like you may also mean a second repo with source for that site. It's a possible solution, but maybe not optimal. We would probably get pushback from SIG Architecture about whether we've adequately considered solutions in the existing repo before creating a new one.
@sftim 👋 Here's what I heard back from netlify:
We don't have any feature like that, Zach - the HTTP 410 status is not something we support natively, and we further don't keep sites or certificates around for deleted sites, so what you're seeing is expected based on how our service works.
As far as solutions go, I have a couple thoughts:
1. I guess you could set up DNS for those hostnames to point to some site you create on another service that can return that status (it isn't even available on our service except as the return from a function; see https://www.netlify.com/docs/functions).
2. You could alternatively configure us to proxy to that service (see https://www.netlify.com/docs/redirects/#proxying) but you'd have to create a separate site here with that hostname just for that purpose, and let us get a new SSL certificate for it, so that's not really much of a win vs my first workflow suggestion around creating a site on another service.
We could more easily return an HTTP 404 without a certificate error. To do that, you could add all those names to some site with no/little content and/or a custom 404 page and let us get a new certificate for the names as it relates to that site (which should happen automatically once you add the name to a different site).
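For reference, the "return it from a function" route in point 1 might look roughly like this (a hedged sketch; the handler shape is my reading of Netlify's Lambda-style Functions API and should be verified against their docs):

```typescript
import type { Handler } from "@netlify/functions";

// Sketch: a Netlify Function that answers 410 Gone for any request it receives,
// with a body that sends readers to the currently published documentation.
export const handler: Handler = async () => ({
  statusCode: 410,
  headers: { "Content-Type": "text/html; charset=utf-8" },
  body:
    '<p>Documentation for this Kubernetes version is no longer published. ' +
    'See <a href="https://kubernetes.io/docs/">the current documentation</a>.</p>',
});
```

A catch-all rule on the site would still be needed to route every path to that function, along the lines of the proxying docs Netlify links above.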
@lucperkins 👋 The netlify response may interest you, too. ☝️
/priority backlog
Is that OK?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
/remove-lifecycle rotten
The problem is still there, although only the 4 most recent documentation versions are linked from the current docs home page, and each of those has a valid TLS certificate. Old documentation is still useful: people sometimes run old versions.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@sftim: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
It still happens.
/remove-lifecycle rotten
We could publish a new Netlify site with only a 404 page (copied from the live docs?)
That site could have aliases for v1-{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14}.docs.kubernetes.io
- every time we do a release, [manually] add an alias for the (unmaintained, old) documentation that we drop
- If we hit a Netlify limit on alias count, start a new site and continue adding aliases to the new one
It's a bit more toil per release but it's good enough.
We can't have a 410 response unless we persuade Netlify to start supporting that. Which they might.
/assign
/remove-priority backlog
/priority important-longterm
/triage accepted
We could publish a new Netlify site with only a 404 page ... That site could have aliases ...
Netlify's support for unlimited sites bodes well that we probably wouldn't hit a cap for this (though it'd be good to ask and confirm). It doesn't seem like much extra work per release: we'd have to create a new alias for the version that ages out, but no more than that; the rest should still point at the primary site. My main question is about automating the alias creation. I'm still fairly new to Netlify, but it would be interesting to write some code that automates this step for us.
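For illustration, a rough sketch of what that automation could look like against Netlify's REST API (the PATCH /sites/{site_id} endpoint and domain_aliases field are my reading of their API and worth confirming; the environment variable names are placeholders):

```typescript
// Hypothetical sketch: add the newly aged-out version's hostname as a domain
// alias on the "dead docs" site via Netlify's REST API.
// NETLIFY_TOKEN and DEAD_DOCS_SITE_ID are placeholders; verify the endpoint and
// field names against Netlify's API documentation before relying on this.
const API = "https://api.netlify.com/api/v1";
const token = process.env.NETLIFY_TOKEN!;
const siteId = process.env.DEAD_DOCS_SITE_ID!;

async function addAlias(hostname: string): Promise<void> {
  // Read the current alias list so we append rather than overwrite.
  const siteRes = await fetch(`${API}/sites/${siteId}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const site = (await siteRes.json()) as { domain_aliases?: string[] };

  const aliases = new Set(site.domain_aliases ?? []);
  aliases.add(hostname);

  await fetch(`${API}/sites/${siteId}`, {
    method: "PATCH",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ domain_aliases: [...aliases] }),
  });
}

// e.g. the version that just dropped out of support:
addAlias("v1-14.docs.kubernetes.io").catch(console.error);
```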
Happy to do this manually n times a year and worry about automating it later.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale