Changing kubernetes/kubernetes default branch name to `main`
Enhancement Description
- One-line enhancement description (can be used as a release note): Changing kubernetes/kubernetes default branch name to main
- Kubernetes Enhancement Proposal:
- Discussion Link:
- https://groups.google.com/a/kubernetes.io/g/steering/c/8fy8omuKdpM
- https://github.com/kubernetes/org/issues/2222
- Primary contact(s) (assignee):
- @justaugustus
- @cpanato
- Responsible SIGs: @kubernetes/sig-release-admins @kubernetes/wg-naming
- Enhancement target (which target equals to which milestone):
- Alpha release target (x.y): v1.24
- Beta release target (x.y): N/A
- Stable release target (x.y): N/A
- [ ] Alpha
  - [ ] KEP (k/enhancements) update PR(s):
  - [ ] Code (k/k) update PR(s):
  - [ ] Docs (k/website) update PR(s):
Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.
/assign @justaugustus @cpanato /sig release /area release-eng /wg naming /milestone v1.24
What's the downstream impact on tools, processes, and developers using the master branch? Will GitHub redirect those automatically, or will they all need to be modified?
When we make the branch rename, all PRs that point to the master branch will be updated automatically to the main branch.
For local development, users will need to update their clones to point to the new branch; GitHub has a page explaining how to do that: https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-branches-in-your-repository/renaming-a-branch#updating-a-local-clone-after-a-branch-name-changes
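For reference, the local-clone update from that GitHub page boils down to four git commands. A self-contained sketch (a throwaway upstream/clone pair stands in for kubernetes/kubernetes; in a real clone you would run only the last four commands):

```shell
#!/bin/sh
# Sketch: updating a local clone after the upstream default branch
# is renamed from master to main (per the GitHub doc linked above).
# The throwaway repos below exist only to make the example runnable.
set -eu
tmp=$(mktemp -d)
git init -q -b master "$tmp/upstream"     # -b needs git >= 2.28
git -C "$tmp/upstream" -c user.name=t -c user.email=t@t \
    commit -q --allow-empty -m init
git clone -q "$tmp/upstream" "$tmp/clone"

# Upstream performs the rename:
git -C "$tmp/upstream" branch -m master main

# The steps a contributor runs in their existing clone:
cd "$tmp/clone"
git branch -m master main        # rename the local branch
git fetch -q origin              # pick up the renamed remote branch
git branch -q -u origin/main main  # re-point the upstream tracking ref
git remote set-head origin -a    # update origin/HEAD
```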
What about all CI flows, sync flows, and downstream consumers of the kubernetes/kubernetes repo?
For the CI/sync flows on our side, we will make the changes ourselves and can monitor for issues.
Downstream consumers will be trickier; we will communicate the change, but we're not sure how to deal with every case.
In the GitHub UI, when you access a fork, GitHub notifies you that the branch changed upstream, so users can notice that.
It's important to understand the scope of that impact for the highest-use repo we have. Is there a way to quantify or survey how much would break on this rename?
I don't know how we can do that; maybe we can send an announcement to our mailing lists and spread the word via social media? Also, GitHub provides some mitigations (https://github.com/github/renaming):
- Show a notice to repository contributors, maintainers, and admins on the repository homepage with instructions to update local copies of the repository
- Show a notice to contributors who git push to the old branch
- Redirect web requests for the old branch name to the new branch name
- Return a "Moved Permanently" response in API requests for the old branch name
Is there a way to quantify or survey how much would break on this rename?
https://github.com/kubernetes/kubernetes/graphs/traffic gives some idea, I guess (the git clone metrics), but it doesn't distinguish automation from manual clones, nor show which branch is cloned, etc.
Renaming breaks git workflows that explicitly use master. It's possible to avoid this by detecting the default branch, but doing so is a bit clunky and not widespread.
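One of the detection approaches alluded to here is asking the remote which branch its HEAD points to, instead of hard-coding master. A self-contained sketch (the throwaway repo exists only to make the example runnable; against a real remote you would run just the ls-remote pipeline):

```shell
#!/bin/sh
# Sketch: detect a remote's default branch rather than assuming "master".
set -eu
tmp=$(mktemp -d)
git init -q -b main "$tmp/upstream"       # -b needs git >= 2.28
git -C "$tmp/upstream" -c user.name=t -c user.email=t@t \
    commit -q --allow-empty -m init
git clone -q "$tmp/upstream" "$tmp/clone"

# `ls-remote --symref` reports the symbolic ref the remote's HEAD
# resolves to, e.g. "ref: refs/heads/main  HEAD".
default=$(git -C "$tmp/clone" ls-remote --symref origin HEAD |
          awk '/^ref:/ { sub("refs/heads/", "", $2); print $2 }')
echo "$default"
```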
For Kubernetes most of our CI jobs are configured such that this requires no changes, because we do the following:
- presubmit and postsubmit configurations exclude release branches, or target a specific release branch, rather than targeting master
- periodic jobs largely consume binary builds from the build job, via GCS, without talking to GitHub at all
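To illustrate the first bullet: a Prow presubmit that excludes release branches, rather than pinning master, needs no change on rename. A hypothetical sketch (the job name is made up; `branches`/`skip_branches` are the relevant Prow job-config fields):

```yaml
presubmits:
  kubernetes/kubernetes:
    - name: pull-kubernetes-example  # hypothetical job name
      always_run: true
      # No `branches:` pin on master; the job runs on whatever the
      # default branch is called, and release branches are excluded:
      skip_branches:
        - release-.*
```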
A handful of periodic jobs will need updating.
Downstream consumers will need to switch over themselves, but most downstreams are likely consuming release branches / tagged releases; for folks consuming HEAD of master, I'd expect most of them to be on our developer mailing lists.
For workflows other than git automation (inbound web links, etc.), GitHub will redirect from the old branch name to the new one anyhow, so nothing should really break there. For new manual clones, main will simply be the default.
For existing local git clones, you just need to get in the habit of referencing main instead of master.
Shell git aliases can be updated to use one of the detection tricks (we can email out guidance; IIRC @cblecker shared a robust one some time back).
EDIT: we actually provide this in the rename issue repo template https://github.com/kubernetes/kubernetes/issues/105601, https://www.kubernetes.dev/resources/rename/#just-before-rename
Git workflows could also avoid explicitly naming the default branch, since a simple git clone will clone and check out the default branch; we should really update our own tooling to do this: https://github.com/kubernetes/test-infra/issues/20667 & https://github.com/kubernetes/test-infra/issues/20672
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/remove-lifecycle stale
something else to consider: staging repos https://github.com/kubernetes/kubernetes/pull/111980
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/lifecycle active
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
What's holding this up at this point? Should we just rip off the band-aid (with announcements and messaging similar to the switch from k8s.gcr.io to registry.k8s.io)?
I think we've all done a ton of these now and it's really not the end of the world...
What's holding this up at this point? Should we just rip off the band-aid (with announcements and messaging similar to the switch from k8s.gcr.io to registry.k8s.io)?
Bandwidth. Nobody has fleshed out a plan and sent out notices, etc. The previous proposal wasn't widely communicated and didn't include discussion with involved parties, e.g. SIG Testing. An updated proposal needs to be discussed with the relevant SIGs.
I think we've all done a ton of these now and it's really not the end of the world...
Nobody said it was. But we do have a LOT of things pointed at this repo, particularly CI jobs galore. It needs a little more coordination than our smaller repos did (and those are still not done ...). It's doable, but everyone maintaining the project has a lot to deal with, so issues without an active champion fall onto the ever-growing backlog.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
Still planned, AFAIK.
@neolit123 yes, thanks