Continued support for leader election transition logic
The release notes for v1.3.1 state:
In v1.3.1 leader elections will be done entirely using the Lease API and no longer using ConfigMaps. v1.3.0 is a safe transition version: running v1.3.0 automatically completes the merging of the election locks, after which you can safely upgrade to v1.3.1.
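For context, the transition described there maps onto client-go's resource-lock types: the "configmapsleases" multilock updates both the ConfigMap and the Lease so that instances on either side of the upgrade agree on a single leader, while "leases" alone is the end state. Below is a minimal sketch, assuming the controller uses client-go's leaderelection package; the namespace and lock name are assumed for illustration only.

```go
package election

import (
	"context"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLock starts leader election using the given lock type. Passing
// resourcelock.ConfigMapsLeasesResourceLock selects the transitional
// multilock; resourcelock.LeasesResourceLock selects the pure Lease lock.
func runWithLock(ctx context.Context, lockType string, client kubernetes.Interface, id string) error {
	lock, err := resourcelock.New(
		lockType,
		"ingress-nginx",        // namespace (assumed for illustration)
		"ingress-nginx-leader", // lock name (assumed for illustration)
		client.CoreV1(),
		client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id},
	)
	if err != nil {
		return err
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   30 * time.Second,
		RenewDeadline:   15 * time.Second,
		RetryPeriod:     5 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* do leader-only work */ },
			OnStoppedLeading: func() { /* stop leader-only work */ },
		},
	})
	return nil
}
```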
From an operational perspective, it is rather impractical to enforce a rollout of v1.3.0 before upgrading to a more recent version. Also, a breaking change in a patch version does not comply with semantic versioning and is thus somewhat unexpected.
I would love to see continued support for the transition logic for at least two minor releases to allow for smooth updates. Running multiple controller instances is probably the default setup for most HA clusters.
Please let me know if I completely misunderstood the statement. Maybe someone can clarify the situation then.
@stephan2012: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.
The `triage/accepted` label can be added by org members by writing `/triage accepted` in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Normally, yes: under semantic versioning we shouldn't introduce breaking changes in a patch release.
The reality is that we are currently in a stabilization and feature-freeze period [1], focused on updates and important fixes. At the same time, we found that we needed to cut a new release to ship some bugfixes.
In previous community meetings we discussed cherry-picking or releasing from the main branch.
This release does affect some users, and we are sorry for that, but we have little capacity for additional releases at the moment; taking on more would affect whether we can achieve our goals as planned.
[1]: https://groups.google.com/a/kubernetes.io/g/dev/c/rxtrKvT_Q8E
> The reality is that we are currently in a stabilization and feature-freeze period [1], focused on updates and important fixes.
@tao12345666333, your statement underlines why the transition logic should be carried forward into new releases: it is normal to pick the latest bugfix release from a minor release branch rather than spend time on a transition release.
@stephan2012 there are multiple factors at play. An ideal situation would offer more resources, opportunities, and choices; things are not always ideal, even though tremendous effort has gone in. Deprecation of APIs in upstream Kubernetes KEPs, the number of available developers, timing, and process are some of the factors influencing the roadmap and delivery.
What I do not understand at the moment is why the transition logic cannot be kept around for a while. Doing so would actually contribute to the goal of stabilization and feature freeze.
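To make the ask concrete, here is a hypothetical sketch of what keeping the transition logic could look like: the lock type stays selectable, defaulting to the pure Lease lock while still permitting the multilock for mixed-version fleets. The flag name is invented for illustration and is not an actual ingress-nginx option.

```go
// Hypothetical sketch only: "election-lock-type" is an invented flag name,
// not a real ingress-nginx flag. Keeping the transition logic could mean
// leaving the lock type overridable for one or two more minor releases.
package election

import (
	"flag"

	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

var electionLockType = flag.String(
	"election-lock-type",            // hypothetical flag name
	resourcelock.LeasesResourceLock, // "leases": the v1.3.1 end state
	"set to 'configmapsleases' while pre-v1.3.1 instances are still running",
)
```

Once every replica runs v1.3.1 or later, the override and the multilock code path could be dropped without a breaking change landing in a patch release.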
A non-authoritative guess at the reason is that the implied options are simply not feasible or viable choices, given multiple constraints. Reviewing several of this year's community meeting videos, posted on YouTube, will give some insight into what is going on in the project.
/remove-kind bug
/kind feature
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale