Dual-stack apiserver support
- One-line enhancement description (can be used as a release note): Make the apiserver service dual-stack in dual-stack clusters
- Kubernetes Enhancement Proposal: https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/2438-dual-stack-apiserver
- Discussion Link: Discussed in sig-net on Jan 21, then a bit on slack (https://kubernetes.slack.com/archives/CGF3A900N/p1611328345018100, including a second thread off a later comment)
- Primary contact (assignee): @danwinship
- Responsible SIGs: sig-network, sig-apimachinery
- Enhancement target (which target equals to which milestone):
- Alpha release target (x.y): 1.23(?)
- Beta release target (x.y): 1.25?
- Stable release target (x.y): 1.26?
- [ ] Alpha
  - [ ] KEP (k/enhancements) update PR(s):
  - [ ] Code (k/k) update PR(s):
  - [ ] Docs (k/website) update PR(s): https://github.com/kubernetes/website/pull/32034
Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.
/sig network /sig api-machinery
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
This is still very much desired ;-)
/remove-lifecycle stale
/milestone v1.23
/stage beta
Hi @danwinship! 1.23 Enhancements team here. Just checking in as we approach enhancements freeze at 11:59pm PST on Thursday 09/09. Here's where this enhancement currently stands:
- [ ] Updated KEP file using the latest template has been merged into the k/enhancements repo.
- [x] ~~KEP status is marked as `implementable`~~
- [x] ~~KEP has a test plan section filled out.~~
- [x] ~~KEP has up to date graduation criteria.~~
- [ ] KEP has a production readiness review that has been completed and merged into k/enhancements.
For this one, looks like we'll need the kep.yaml updated to reflect the current stage and latest milestone. It also looks like you'll still need to complete a PRR.
Thanks!
@salaxander sorry, the initial description hadn't been updated in a while. This did not go alpha in 1.22 and thus is not scheduled to go beta in 1.23 (but it should hopefully go alpha, and I think we are good there, because you don't need a completed PRR to go to alpha).
@danwinship sounds good! Then we're all good once the KEP update merges.
If you want to hit 1.23 you need a PRR soon. Should be pretty simple.
/stage alpha
Hi, 1.23 Enhancements Lead here 👋. With enhancements freeze now in effect, this enhancement has not met the criteria for the freeze and has been removed from the milestone.
As a reminder, the criteria for enhancements freeze is:
- KEP is merged into k/enhancements repo with up to date latest milestone and stage.
- KEP status is marked as `implementable`.
- KEP has a test plan section filled out.
- KEP has up to date graduation criteria.
- KEP has a production readiness review for the correct stage that has been completed and merged into k/enhancements.
Feel free to file an exception to add this back to the release. If you plan to do so, please file this as early as possible.
Thanks!
/milestone clear
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
KEP is provisional, please update to implementable
and assign a PRR approver (I'd suggest @johnbelamaric)
Hi @danwinship! 1.24 Enhancements team here. Just checking in as we approach enhancements freeze at 18:00 PT on Thursday Feb 3rd. This enhancement is targeting alpha for 1.24.
Here's where this enhancement currently stands:
- [x] Updated KEP file using the latest template has been merged into the k/enhancements repo -- this will be the KEP with `latest-milestone: 1.24`
- [x] KEP status is marked as `implementable` for this release with `latest-milestone: 1.24`
- [x] KEP has a test plan section filled out.
- [x] KEP has up to date graduation criteria.
- [x] KEP has a production readiness review that has been completed and merged into k/enhancements.
The status of this enhancement is marked as `tracked`.
Thanks!
Hi @danwinship :wave: 1.24 Docs lead here.
This enhancement is marked as Needs Docs for the 1.24 release.
Please follow the steps detailed in the documentation to open a PR against the `dev-1.24` branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday, March 31st, 2022 @ 18:00 PDT.
Also, if needed take a look at Documenting for a release to familiarize yourself with the docs requirement for the release.
Thanks!
Hi @danwinship 1.24 Enhancements Team here,
With Code Freeze approaching at 18:00 PDT on Tuesday March 29th 2022, the enhancement status is `at risk` as there are no linked k/k PRs. Kindly list them in this issue. Thanks!
(update) Are following PRs part of the KEP code implementation?
- https://github.com/kubernetes/kubernetes/pull/107872
- https://github.com/kubernetes/kubernetes/pull/107878
Hi @danwinship and @thockin :wave: 1.24 Release Comms team here.
We have an opt-in process for the feature blog delivery. If you would like to publish a feature blog for this issue in this cycle, then please opt in on this tracking sheet.
The deadline for submissions and the feature blog freeze is scheduled for 01:00 UTC Wednesday 23rd March 2022 / 18:00 PDT Tuesday 22nd March 2022. Other important dates for delivery and review are listed here: https://github.com/kubernetes/sig-release/tree/master/releases/release-1.24#timeline.
For reference, here is the blog for 1.23.
Please feel free to reach out any time to me or on the #release-comms channel with questions or comments.
Thanks!
Hi, 1.24 Enhancements Lead here 👋. With code freeze now in effect, this enhancement has not met the criteria for the freeze and has been removed from the milestone.
As a reminder, the criteria for code freeze is:
All PRs to the kubernetes/kubernetes repo have merged by the code freeze deadline.
Feel free to file an exception to add this back to the release. If you plan to do so, please file this as early as possible.
Thanks!
/milestone clear
/remove-lifecycle stale
/remove-lifecycle stale
No progress for 1.26
/remove-lifecycle stale
(from the KEP)
> Another possibility is that instead of cooperatively maintaining a single EndpointSlice (or pair of EndpointSlices), each apiserver would write out its own slice(s) containing only its own IP(s). Clients would then have to aggregate all of the slices together to get the full list of active IPs.
I like the idea of an EndpointSlice per API server and recommending that as the discovery mechanism. There can still be a reconciler making a best effort to update a single-stack Endpoints based on those EndpointSlices; legacy support is important.
The individual EndpointSlices then each have their own metadata, which allows things like (say) annotating one of the EndpointSlices to record self-observations, or labelling the endpoint slice with API server identity (cf https://kubernetes.io/docs/concepts/architecture/leases/#api-server-identity). It shouldn't be a scaling issue as few clusters have more than 9 API servers.
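The client-side aggregation this implies could be sketched roughly like this. This is a minimal illustration only: the `EndpointSlice` struct below is a simplified stand-in for the real discovery.k8s.io/v1 type, and the slice names are hypothetical.

```go
package main

import (
	"fmt"
	"sort"
)

// EndpointSlice is a simplified stand-in for a discovery.k8s.io/v1
// EndpointSlice; only the fields relevant to the sketch are modeled.
type EndpointSlice struct {
	Name        string   // hypothetical, e.g. "kubernetes-<apiserver-id>"
	AddressType string   // "IPv4" or "IPv6"
	Addresses   []string // this apiserver's own IP(s)
}

// aggregateAddresses merges the per-apiserver slices into the full,
// de-duplicated set of active apiserver IPs, keyed by address family.
func aggregateAddresses(slices []EndpointSlice) map[string][]string {
	seen := map[string]map[string]bool{}
	for _, s := range slices {
		if seen[s.AddressType] == nil {
			seen[s.AddressType] = map[string]bool{}
		}
		for _, a := range s.Addresses {
			seen[s.AddressType][a] = true
		}
	}
	out := map[string][]string{}
	for family, addrs := range seen {
		for a := range addrs {
			out[family] = append(out[family], a)
		}
		sort.Strings(out[family])
	}
	return out
}

func main() {
	// Two apiservers, one dual-stack and one IPv4-only.
	slices := []EndpointSlice{
		{Name: "kubernetes-apiserver-a", AddressType: "IPv4", Addresses: []string{"192.0.2.10"}},
		{Name: "kubernetes-apiserver-a", AddressType: "IPv6", Addresses: []string{"2001:db8::10"}},
		{Name: "kubernetes-apiserver-b", AddressType: "IPv4", Addresses: []string{"192.0.2.11"}},
	}
	fmt.Println(aggregateAddresses(slices))
}
```

A reconciler producing the legacy single-stack Endpoints object could consume the same aggregated view, picking only the primary address family.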
Another benefit, potentially:
If I (mis)configure two API servers to have the same IPv4 address, a mechanism with a single EndpointSlice per address family leads both API servers to conclude that their identity is reconciled. With an EndpointSlice per API server, the conflict allows all API servers in the cluster to spot the clash and report this via metrics.
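That clash detection falls out naturally once each slice carries its owner's identity. A rough sketch, again with a simplified stand-in type rather than the real API:

```go
package main

import "fmt"

// sliceInfo is an illustrative stand-in for a per-apiserver
// EndpointSlice plus its owner's identity label.
type sliceInfo struct {
	APIServer string   // owning apiserver's identity
	Addresses []string // the IP(s) that apiserver claims
}

// findClashes reports every address claimed by more than one
// apiserver -- a conflict that a single shared per-family slice
// would silently "reconcile" away.
func findClashes(slices []sliceInfo) map[string][]string {
	owners := map[string][]string{}
	for _, s := range slices {
		for _, a := range s.Addresses {
			owners[a] = append(owners[a], s.APIServer)
		}
	}
	clashes := map[string][]string{}
	for addr, who := range owners {
		if len(who) > 1 {
			clashes[addr] = who // all claimants, for metrics/logging
		}
	}
	return clashes
}

func main() {
	// Two apiservers misconfigured with the same IPv4 address.
	clashes := findClashes([]sliceInfo{
		{APIServer: "apiserver-a", Addresses: []string{"192.0.2.10"}},
		{APIServer: "apiserver-b", Addresses: []string{"192.0.2.10"}},
		{APIServer: "apiserver-c", Addresses: []string{"192.0.2.11"}},
	})
	fmt.Println(clashes)
}
```

Each apiserver (or any observer) could run this over the watched slices and surface non-empty results via a metric.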
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale