packaging specs should be per-kubernetes version and in-tree (k/k)
Having these out of tree means that:
- the systemd specs are not available with kubernetes as a reference for people not using the deb/rpm packages (e.g. ~all of the upstream cluster provisioning tools) https://github.com/kubernetes/kubernetes/issues/88832
- we don't have an easy way to make somewhat-breaking changes to these without backporting them to previous releases, which blocks fixes for the "kubelet crashloops constantly until kubeadm runs" problem, and that is ... not good https://github.com/kubernetes/release/pull/1352
see also: https://github.com/kubernetes/release/issues/1636
I think we can leave the rapture stuff here, and even build tools, but the basic package specs should be versioned with Kubernetes.
If this is not acceptable to the krel crew, I at least want to move the systemd unit files back in-tree and have the packaging consume them the same way it consumes the Kubernetes bits. We'll need some way to deal with things like potentially adding new files to package in new kubernetes versions.
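For reference, the files in question are tiny. Here is a rough, abbreviated sketch of the kubelet unit and the kubeadm drop-in as shipped by the deb/rpm packages (paraphrased from memory, not the verbatim contents of the kubernetes/release specs):

```
# kubelet.service (abbreviated sketch, not the verbatim packaged file)
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

# 10-kubeadm.conf drop-in (abbreviated sketch)
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# kubeadm writes this env file at runtime; it does not exist until "kubeadm init"/"kubeadm join" has run
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```

Note the combination that produces the crashloop mentioned above: Restart=always plus a kubelet that exits immediately because the kubeadm-generated config does not exist yet.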
@kubernetes/release-engineering @neolit123
I will contribute to this, but I want to make sure there is no objection first. At the very least I think it would be much more reasonable to move the systemd files back to k/k.
Once that is done, I intend to ship a solution to the crashlooping upstream as well. We're working on it ~downstream in https://github.com/kubernetes-sigs/kind/pull/2072 (the only downstream part is the systemd specs: since we stopped being able to depend on k/k shipping them, we carry our own systemd files, just like every other upstream kubernetes deployer ...)
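To illustrate the shape of a fix (a generic sketch only, not necessarily the approach the linked PRs take): systemd can gate the unit on kubeadm having produced the kubelet config, so the unit is skipped rather than crashlooping until kubeadm starts it explicitly:

```
# Sketch only: gate kubelet startup on the kubeadm-generated config existing.
# ConditionPathExists is standard systemd; when the condition fails, the start
# is skipped (not failed), so Restart=always never kicks in. kubeadm can then
# "systemctl start kubelet" once it has written the file.
[Unit]
ConditionPathExists=/var/lib/kubelet/config.yaml
```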
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
As I have mentioned before, I have no strong preference about where the files live, as long as they are versioned per k8s version. That would allow us to make changes in the latest version without affecting older versions.
/remove-lifecycle rotten
yeah, at the very least I think that is an important blocker to improving the kubeadm experience, and it should be a trivial problem + solution.
I really think they should also be in-tree in Kubernetes, because that is the most straightforward way to ensure they're versioned with kubernetes (they can even get small fixes between patches!). People not using the debs/rpms still need these files, and it's not fun for them to discover that the files are buried deep in one of our 100s of repos. Most projects with official systemd specs ship them alongside the tool they're for, imho.
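Concretely, "versioned with Kubernetes" could be as simple as a per-tag directory in k/k that the packaging consumes; a hypothetical layout (no such directory exists in k/k today):

```
# Hypothetical in-tree layout; each release tag carries its own copy, so a
# breaking change in a new minor never touches already-released versions.
build/
  systemd/
    kubelet.service   # base unit
    10-kubeadm.conf   # kubeadm drop-in
```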
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/lifecycle frozen
For context: when this was last discussed, it was blocked on kubernetes package building being migrated from the old tool to the new tool, which is still not ready; changes like this are blocked in the meantime.
If we reach a state where packaging changes are accepted again, this is a very straightforward change (just move the files to k/k and consume them from k/k when present) that shouldn't take much work. I think it would particularly benefit the kubeadm ecosystem, as we could finally ship https://github.com/kubernetes/release/pull/1352 upstream, if nothing else.
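A rough sketch of what "consume them from k/k when present" could look like in the packaging scripts (both the in-tree URL and the local fallback path are hypothetical):

```bash
#!/usr/bin/env bash
# Sketch only: prefer the unit file shipped in the kubernetes tree for the
# target tag, falling back to the copy carried in kubernetes/release.
set -euo pipefail

TAG="${1:?usage: $0 <kubernetes tag, e.g. v1.21.0>}"
# Hypothetical in-tree location; k/k does not ship this file today.
url="https://raw.githubusercontent.com/kubernetes/kubernetes/${TAG}/build/systemd/kubelet.service"

if curl -fsSL "${url}" -o kubelet.service; then
  echo "using in-tree kubelet.service from k/k ${TAG}"
else
  # Hypothetical local path for the existing kubernetes/release copy.
  cp ./specs/kubelet.service kubelet.service
  echo "falling back to the spec carried in kubernetes/release"
fi
```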