apiserver-network-proxy
                                
                        Release v0.1.0?
Up to this point we've been making releases against master and haven't bumped the minor version yet. This was fine in the earlier stages of development, but as we approach a steadier cadence of features, refactors, and bug fixes, it might be useful to start managing release branches and cut a v0.1.0 release to which we only backport bug fixes.
I personally think that now is a good time to cut v0.1.0, since we fixed various memory leak bugs recently and there are also some open PRs that would be a better fit for v0.2.0 (https://github.com/kubernetes-sigs/apiserver-network-proxy/pull/343, https://github.com/kubernetes-sigs/apiserver-network-proxy/pull/342, https://github.com/kubernetes-sigs/apiserver-network-proxy/pull/310).
@cheftako @rata @mihivagyok thoughts?
@andrewsykim I think it is a good idea, but of course it will add some overhead.
I think, for simplicity, I'd prefer to continue to release konn-client and proxy-server/agent in lockstep, so we don't have to figure out which patches are safe with older konn-client versions and which aren't, etc. Just have one k8s release use one konn-client minor (let's say 0.1.0) that will be maintained alongside proxy-{server,agent} 0.1.0 for patch releases. Maybe the next k8s release uses konn-client 0.2.0, and so on.
I'd say that if we see the need to ship some proxy-server features sooner than the pace at which people upgrade their k8s versions, we can consider more complex schemes. But let's do that on an as-needed basis and start with something simple: release konn-client and proxy-{server,agent} in lockstep and don't support mixing different minors between them.
We will need CI for each supported version, some automation to create a release branch, and I'm not sure what else (I'm not familiar with how releases are created). Anything else?
Proposal on how to proceed with this (a rough command sketch follows the list):
- Cut a release-0.1 branch off of master now
- Merge any outstanding bug fixes to master and also backport to release-0.1
- Cut v0.1.0 off of release-0.1
- Start merging any approved feature PRs to master
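For concreteness, the steps above map roughly onto the git workflow below. This is only a sketch: the remote name `upstream` and the `<fix-sha>` placeholder are assumptions, and in practice backports and the tag would go through the project's normal PR and release tooling.

```sh
# Cut the release branch from the current master and publish it.
git fetch upstream
git checkout -b release-0.1 upstream/master
git push upstream release-0.1

# Backport an already-merged bug fix from master (<fix-sha> is a placeholder).
git checkout release-0.1
git cherry-pick -x <fix-sha>
git push upstream release-0.1

# Once the branch is stable, tag v0.1.0 from it.
git tag -a v0.1.0 -m "apiserver-network-proxy v0.1.0" release-0.1
git push upstream v0.1.0
```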
SGTM!
@jkh52 @cheftako thoughts?
I may be in the minority, but I would keep things as-is and make sure we immediately revert / stabilize if tests show impact.
If we do branch: what's the lifetime of these branches? I.e., when would 0.1.0 become stale / unsupported? Is there a correspondence to the k/k minor version?
/assign @cheftako
I agree to some extent with @jkh52.
Perhaps more importantly, comb through issues/PRs, tag them by whether they are breaking changes and by priority/impact, and then decide whether it makes sense to cut a v0.1.0 release?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
/lifecycle frozen
@jkh52: Reopened this issue.
In response to this:
/reopen
/lifecycle frozen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @jkh52
I want to wait for https://github.com/kubernetes-sigs/apiserver-network-proxy/pull/445 to merge; then I will create v0.1.0 tags on commit ff645a8789becd9e6d7f1b674228619dccd60361 (to have parity with v0.0.35).
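A sketch of what creating those tags might look like, assuming the konnectivity-client Go submodule gets its own konnectivity-client/v0.1.0 tag (the usual convention for nested Go modules) and that the remote is called `upstream`:

```sh
# Tag the repo root and the konnectivity-client submodule at the same commit.
# The nested-module tag name follows Go's <dir>/<version> convention.
git tag -a v0.1.0 -m "v0.1.0" ff645a8789becd9e6d7f1b674228619dccd60361
git tag -a konnectivity-client/v0.1.0 -m "konnectivity-client v0.1.0" ff645a8789becd9e6d7f1b674228619dccd60361
git push upstream v0.1.0 konnectivity-client/v0.1.0
```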
v0.1.0 tags are created.
We should push this to k/k master, to gain coverage of the ANP master branch. But it could be reasonable to wait until the next interesting feature lands (the agent memory leak fixes, hopefully).
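For reference, bumping the dependency in kubernetes/kubernetes would look roughly like the following, assuming k/k's usual vendoring scripts are used; `<sha-or-tag>` is a placeholder for whatever ANP revision we decide to pin:

```sh
# From the kubernetes/kubernetes repo root: pin konnectivity-client to a new
# revision and regenerate vendor/ (sketch; exact usage per k/k's vendoring docs).
hack/pin-dependency.sh sigs.k8s.io/apiserver-network-proxy/konnectivity-client <sha-or-tag>
hack/update-vendor.sh
```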