
packaging specs should be per-kubernetes version and in-tree (k/k)

Open BenTheElder opened this issue 4 years ago • 11 comments

Having these out of tree means that:

  • the systemd specs are not available with Kubernetes as a reference for people not using the deb/rpm packages (e.g. ~all of the upstream cluster provisioning tools) https://github.com/kubernetes/kubernetes/issues/88832
  • we don't have an easy way to make somewhat breaking changes to these without backporting them to previous releases, which blocks fixes to "kubelet crashloops constantly until kubeadm runs" ... which is not good (see the sketch after this list) https://github.com/kubernetes/release/pull/1352
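
For illustration, a minimal sketch of one way to stop the pre-kubeadm crashloop, assuming the failure mode is Restart=always relaunching a kubelet that exits until kubeadm writes its config. The drop-in path and directive below are illustrative only, not the shipped packaging, and not necessarily the shape of the fix in that PR:

```sh
# Hypothetical drop-in (not the shipped spec): gate kubelet startup on the
# config file kubeadm writes, so systemd skips the start ("condition failed")
# instead of crashlooping. kubeadm starts kubelet itself after writing the
# file, so nothing has to watch for it appearing.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/20-wait-for-kubeadm.conf <<'EOF' >/dev/null
[Unit]
ConditionPathExists=/var/lib/kubelet/config.yaml
EOF
sudo systemctl daemon-reload
```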

see also: https://github.com/kubernetes/release/issues/1636

I think we can leave the rapture stuff here, and even the build tools, but the basic package specs should be versioned with Kubernetes.

If this is not acceptable to the krel crew, I at least want to move the systemd unit files back in-tree and have the packaging consume them in the same way it consumes the Kubernetes binaries. We'll need some way to deal with things like new files to package appearing in new Kubernetes versions.
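
As a sketch of what that consumption could look like (the in-tree build/systemd/ path is an invention for illustration; no such directory exists today), the package build would pin the spec to the same tag as the binaries:

```sh
# Hypothetical build step: fetch the unit file from k/k at the exact tag
# being packaged, the same way the build pins the kubernetes binaries.
# build/systemd/ is an assumed path, not a real one.
K8S_VERSION="v1.20.4"
curl -fsSLo kubelet.service \
  "https://raw.githubusercontent.com/kubernetes/kubernetes/${K8S_VERSION}/build/systemd/kubelet.service"
```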

@kubernetes/release-engineering @neolit123

BenTheElder avatar Feb 13 '21 20:02 BenTheElder

I will contribute to this, but I want to make sure there is no objection first. At the very least I think it would be much more reasonable to move the systemd files back to k/k.

Once that is done, I intend to ship a solution to the crashlooping upstream as well. We're working on it ~downstream in https://github.com/kubernetes-sigs/kind/pull/2072 (the only downstream part is the systemd specs: since we can no longer depend on k/k shipping them, we maintain our own systemd files, just like every other upstream Kubernetes deployer ...)

BenTheElder avatar Feb 13 '21 21:02 BenTheElder

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar May 14 '21 23:05 fejta-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten

fejta-bot avatar Jun 14 '21 00:06 fejta-bot

Like I have mentioned before, I have no strong preference about where the files live, as long as they are versioned per k8s version. This would allow us to make changes in the latest version without affecting older versions.
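
To make that concrete, a rough sketch (branch and path names are illustrative): each release tag would carry its own frozen copy of the specs, and the packaging build would check out the copy matching the version it builds:

```sh
# Hypothetical: pull the spec files for exactly the version being packaged
# from its release tag; older tags keep their frozen copies untouched.
K8S_VERSION="v1.21.0"
git -C kubernetes fetch --depth 1 origin tag "${K8S_VERSION}"
git -C kubernetes checkout "${K8S_VERSION}" -- build/systemd/
```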

/remove-lifecycle rotten

neolit123 avatar Jun 14 '21 13:06 neolit123

yeah, at the very least I think that is an important blocker to improving the kubeadm experience, and it should be a trivial problem + solution.

I really think they should also be in-tree in Kubernetes, because that is the most straightforward way to ensure they're versioned with Kubernetes (they can even get small fixes between patches!), and people not using the debs/rpms still need these files; it's not fun for them trying to figure out that they're buried deep in one of our 100s of repos. Most projects with official systemd specs supply them alongside the tool they are used for, imho.

BenTheElder avatar Jun 18 '21 03:06 BenTheElder

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 16 '21 03:09 k8s-triage-robot

/remove-lifecycle stale

neolit123 avatar Sep 16 '21 11:09 neolit123

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Dec 15 '21 12:12 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jan 14 '22 12:01 k8s-triage-robot

/lifecycle frozen

neolit123 avatar Jan 14 '22 12:01 neolit123

For context: when this was last discussed, it was blocked on Kubernetes package building being migrated from the old tool to the new tool, which is still not ready; changes like this are being blocked in the meantime.

If we reach a state where packaging changes are accepted again, this is a very straightforward change (just move the files to k/k and use them from k/k when present) that shouldn't take much work. I think it would particularly benefit the kubeadm ecosystem, as we could finally ship https://github.com/kubernetes/release/pull/1352 upstream if nothing else.

BenTheElder avatar Jan 14 '22 19:01 BenTheElder