
Release v0.1.0?

Open andrewsykim opened this issue 3 years ago • 10 comments

Up to this point we've been making releases from master and haven't bumped the minor version yet. This was fine in the earlier stages of development, but as we approach a steadier cadence of features, refactors, and bug fixes, it might be useful to start managing release branches and cut a v0.1.0 release to which we only backport bug fixes.

I personally think that now is a good time to cut v0.1.0, since we fixed various memory leak bugs recently and there are also some open PRs that would be a better fit for v0.2.0 (https://github.com/kubernetes-sigs/apiserver-network-proxy/pull/343, https://github.com/kubernetes-sigs/apiserver-network-proxy/pull/342, https://github.com/kubernetes-sigs/apiserver-network-proxy/pull/310).

andrewsykim avatar Mar 17 '22 13:03 andrewsykim

@cheftako @rata @mihivagyok thoughts?

andrewsykim avatar Mar 17 '22 13:03 andrewsykim

@andrewsykim I think it is a good idea, but of course it will imply more overhead.

For simplicity, I'd prefer to continue releasing konn-client and proxy-server/agent in lockstep, so we don't have to work out which patches are safe with older konn-client versions and which aren't. Just have one k8s release use one konn-client minor (say, 0.1.0) that is maintained alongside proxy-{server,agent} 0.1.0 for patch releases. Maybe the next k8s release uses konn-client 0.2.0, and so on.

If we see the need to ship some proxy-server features faster than people upgrade their k8s versions, we can consider something more complex. But I'd say let's do that on an as-needed basis and start with something simple: release konn-client and proxy-{server,agent} in lockstep, and don't support mixing different minors between them.

We will need CI for each supported version and some automation to create release branches; I'm not sure what else (I'm not familiar with how releases are created). Anything else?

rata avatar Mar 17 '22 14:03 rata

Proposal on how to proceed with this:

  • Cut a release-0.1 branch off of master now
  • Merge any outstanding bug fixes to master and backport them to release-0.1
  • Cut v0.1.0 off of release-0.1
  • Start merging any approved feature PRs to master

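A hypothetical git sketch of this flow, run in a throwaway local repo (in practice these commands would run in the apiserver-network-proxy checkout; the placeholder commit stands in for the real master history):

```shell
# Sketch of the proposed branch-and-tag flow in a throwaway repo.
set -e
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "placeholder for master"

git checkout -q -b release-0.1   # cut the release branch off master
# Outstanding bug fixes merged to master would be backported here,
# e.g. `git cherry-pick <sha>` for each fix.
git tag v0.1.0                   # cut v0.1.0 off release-0.1
git tag --list
```

The tag lands on the release branch, so later patch releases (v0.1.1, ...) can be cut from release-0.1 while feature work continues on master.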
andrewsykim avatar Apr 11 '22 19:04 andrewsykim

SGTM!

rata avatar Apr 12 '22 10:04 rata

@jkh52 @cheftako thoughts?

andrewsykim avatar Apr 13 '22 15:04 andrewsykim

I may be in the minority, but I would keep things as-is and make sure we immediately revert / stabilize if tests show impact.

If we do branch: what's the lifetime of these branches? i.e., when would 0.1.0 become stale / unsupported? Is there a correspondence to the k/k minor version?

jkh52 avatar Apr 18 '22 18:04 jkh52

/assign @cheftako

jkh52 avatar May 13 '22 21:05 jkh52

I agree to some extent with @jkh52.

Perhaps more importantly, we should comb through open issues/PRs and tag each one by breaking change, priority, and impact, and then decide whether it makes sense to cut a v0.1.0 release?

ipochi avatar May 19 '22 11:05 ipochi

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
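Taken together, these rules close an untouched issue about 150 days after its last activity. A small sketch of the timeline, assuming GNU date and an illustrative start date:

```shell
# Day 90: lifecycle/stale; day 120: lifecycle/rotten; day 150: closed.
last_activity="2022-03-17"   # illustrative; the real clock is per-issue
for offset in 90 120 150; do
    date -u -d "$last_activity + $offset days" +%Y-%m-%d
done
# → 2022-06-15, 2022-07-15, 2022-08-14
```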

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 17 '22 12:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Sep 16 '22 12:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Oct 16 '22 13:10 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Oct 16 '22 13:10 k8s-ci-robot

/reopen
/lifecycle frozen

jkh52 avatar Oct 17 '22 20:10 jkh52

@jkh52: Reopened this issue.

In response to this:

/reopen
/lifecycle frozen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Oct 17 '22 20:10 k8s-ci-robot

/assign @jkh52

I want to wait for https://github.com/kubernetes-sigs/apiserver-network-proxy/pull/445 to merge, then I would create v0.1.0 tags on commit ff645a8789becd9e6d7f1b674228619dccd60361 (to have parity with 0.0.35).

jkh52 avatar Dec 29 '22 19:12 jkh52

v0.1.0 tags are created.

We should bump k/k master to this version, to gain coverage of the ANP master branch. But it could be reasonable to wait until the next interesting feature lands (the agent memory leak fixes, hopefully).

jkh52 avatar Jan 10 '23 23:01 jkh52