node-feature-discovery
Default Branch Migration
This issue tracks the tasks needed to migrate the default branch of the repo from `master` to `main`, following the https://www.kubernetes.dev/resources/rename/ guidance.
Anytime
These changes are non-disruptive and can be made anytime before renaming the branch.
- [x] If a presubmit or postsubmit prowjob triggers on the `master` branch (`branches` field of the prowjob), add the `main` branch to the list (see [kubernetes/test-infra#20665] for an example).
- [x] If the [milestone_applier] prow config references the `master` branch, add the `main` branch to the config (see [kubernetes/test-infra#20675] for an example).
- [x] If the [branch_protection] prow config references the `master` branch, add the `main` branch to the config.
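The `branches` addition above might look like the following sketch in the prowjob config (the job name is hypothetical; the layout follows the standard prow job config schema):

```yaml
presubmits:
  kubernetes-sigs/node-feature-discovery:
  - name: pull-node-feature-discovery-verify   # hypothetical job name
    branches:
    - master
    - main   # added ahead of the rename so the job triggers on both
```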
Just before rename
These changes are disruptive and should be made just before renaming the branch.
- [ ] For periodic prowjobs, or any prowjob that mentions the `master` branch in `base_ref`, update them to the `main` branch. Ensure that these changes happen in lock-step with the branch rename (jobs triggered in between landing these changes and renaming the branch will fail).
  - For bootstrap-based jobs, ensure the branch is explicitly specified, e.g. `kubernetes/foo=main`. [kubernetes/test-infra#20667] may eventually allow for non-disruptive changes.
  - For pod-utils based jobs, ensure the branch is explicitly specified, e.g. `base_ref: main`. [kubernetes/test-infra#20672] may eventually allow for non-disruptive changes.
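For a pod-utils based periodic, the `base_ref` update might be sketched as follows (the job name and interval are hypothetical; `extra_refs` is the standard pod-utils field for specifying the checkout):

```yaml
periodics:
- name: ci-node-feature-discovery-build   # hypothetical job name
  interval: 24h
  extra_refs:
  - org: kubernetes-sigs
    repo: node-feature-discovery
    base_ref: main   # was master; must land in lock-step with the rename
```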
- [ ] If a prowjob mentions `master` in its name, rename the job to not include the branch name, e.g. `pull-repo-verify-master` -> `pull-repo-verify`. [status-reconciler] should automatically migrate PR status contexts to the new job name, and retrigger accordingly, but we have anecdotally found it sometimes misses changes.
  - NOTE: our infrastructure doesn't understand the concept of job renames, so from the perspective of e.g. https://testgrid.k8s.io the job will appear to have lost history and start from scratch.
- [ ] If a prowjob calls scripts or code in your repo that explicitly reference `master`, update all references to use `main`, or auto-detect the remote branch.
  - e.g. using git to auto-detect:

    ```shell
    # for existing clones, update their view of the remote
    git fetch origin
    git remote set-head origin -a
    # for new clones, or those updated as above, this prints "main" post-rename
    echo $(git symbolic-ref refs/remotes/origin/HEAD)
    ```

  - e.g. using github's api to auto-detect:

    ```shell
    # gh is https://github.com/cli/cli, this will print "main" post-rename
    gh api /repos/kubernetes-sigs/slack-infra | jq -r .default_branch
    ```

- [ ] If the repo has netlify configured for it, ask a member of the GitHub Management Team to rename the `master` branch to `main` in the netlify site config. It can't be controlled through the netlify config in the repo.
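Another auto-detect option, as a sketch: `git ls-remote --symref` asks the remote directly which branch its HEAD points at, with no need for an up-to-date local clone. It is demonstrated here against a throwaway local repository standing in for the post-rename remote; point it at any real remote URL instead.

```shell
tmp=$(mktemp -d)
# throwaway repository standing in for the post-rename remote
git init -q -b main "$tmp/repo"
git -C "$tmp/repo" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m init
# the first output line names the remote default branch,
# e.g. "ref: refs/heads/main	HEAD"
git ls-remote --symref "$tmp/repo" HEAD
```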
Approval
- [ ] Once all non-disruptive tasks have been completed and disruptive tasks have been identified, assign the GitHub Management team ([@kubernetes/owners]) for approval.
Rename the default branch
- [ ] Rename the default branch from `master` to `main` using the GitHub UI by following the [official instructions].
Changes post-rename
After the default branch has been renamed to main, make the following
changes.
Note: There might be additional changes required that have not been covered in this checklist.
Prowjobs
- [ ] If a prowjob still references the `master` branch in the `branches` field, remove the `master` branch (see [kubernetes/test-infra#20669] for an example).
Prow config
- [ ] If the [milestone_applier] prow config references the `master` branch, remove it from the config.
- [ ] If the [branch_protection] prow config references the `master` branch, remove it from the config.
Other
- [ ] If any docs reference the `master` branch, update to `main` (URLs will be automatically redirected).
- [ ] Ensure that CI and PR tests work fine.
  - If there are any outstanding PRs you can /approve to merge, do so to verify that presubmits and postsubmits work as expected
- [ ] Trial the local development experience with a pre-rename clone.
  - ensure the [GitHub instructions to rename your local branch] work
  - consider updating your fork's default remote branch name such that if you have git autocompletion enabled, typing `ma<tab>` will autocomplete to `main`
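The local-clone update from GitHub's published instructions boils down to four git commands. The following self-contained sketch simulates a server-side `master` -> `main` rename against a throwaway local "remote" and then applies them; the remote name `origin` and the throwaway paths are assumptions for the demo.

```shell
set -e
tmp=$(mktemp -d)
# throwaway bare "remote" still on master, plus a pre-rename clone of it
git init -q --bare -b master "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/clone" 2>/dev/null
cd "$tmp/clone"
git symbolic-ref HEAD refs/heads/master     # pin the unborn branch name
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m init
git push -q origin master
# simulate the server-side rename master -> main
git --git-dir="$tmp/remote.git" branch -m master main
git --git-dir="$tmp/remote.git" symbolic-ref HEAD refs/heads/main
# the four steps from GitHub's "rename your local branch" instructions
git branch -m master main
git fetch -q origin
git branch -u origin/main main
git remote set-head origin -a
git symbolic-ref refs/remotes/origin/HEAD   # now refs/remotes/origin/main
```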
- [ ] Send a notice about the branch rename to your SIG's mailing list. Include the link to the [GitHub instructions to rename your local branch].
- [ ] Update `scripts/github/update-gh-pages.sh` to handle `main`
- [ ] Update `docs/`
  - [ ] Update references to refer to the `main` branch
  - [ ] Update `docs/_config.yml`
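For the docs sweep above, something like the following can help find lingering references (a rough sketch; review each hit manually, since some occurrences of "master", e.g. in links to other projects, may be intentional). It is demonstrated here against a throwaway docs tree with a hypothetical file name:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/docs"
# stand-in docs tree containing one lingering reference
printf 'Build from the master branch.\n' > "$tmp/docs/get-started.md"
# list file, line number, and text for every remaining "master" mention
grep -rn 'master' "$tmp/docs"
```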
Thanks @ArangoGutierrez for the extensive checklist. Off the top of my head, in addition to these, in NFD we need to
- [ ] Update `scripts/github/update-gh-pages.sh` to handle `main`
- [ ] Update `docs/`
  - [ ] Update references to refer to the `main` branch
  - [ ] Update `docs/_config.yml`
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
/reopen
@ArangoGutierrez: Reopened this issue.
In response to this:
/reopen
I can help with this task if nobody is working on it already @ArangoGutierrez @marquiz
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I can help with this task if nobody is working on it already @ArangoGutierrez @marquiz
Hey @fmuyassarov let's work together on this
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".