cluster-api-provider-nested
✨ [VC] Support Staging Releases for VC code base
User Story
As a user, I would like to be able to use VC with CAPN from Prow-based builds so that I can use automated and "blessed" images.
Detailed Description
Right now CAPN is released on every main branch merge; this is done using the make release-staging target. We should update these downstream targets to add VC into this.
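A rough sketch of what adding VC to that flow could look like is below; the registry, tag, and target names are assumptions for illustration, not the actual CAPN Makefile.

```makefile
# Illustrative sketch only: registry, tag, and target names are assumptions,
# not the actual contents of the CAPN Makefile.
REGISTRY ?= gcr.io/k8s-staging-cluster-api-nested
TAG      ?= $(shell git rev-parse --short HEAD)   # SHA-based tag, one per main merge
VC_IMG   := $(REGISTRY)/virtualcluster-manager:$(TAG)

.PHONY: docker-build-vc docker-push-vc release-staging

docker-build-vc:   # build the VC image from the virtualcluster/ subtree
	docker build -t $(VC_IMG) ./virtualcluster

docker-push-vc: docker-build-vc
	docker push $(VC_IMG)

# The existing CAPN staging prerequisites would stay here, with the VC
# targets added alongside them.
release-staging: docker-push-vc
```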
Anything else you would like to add:
This doesn't need to deal with the manifests, this will be handled in the make release changes.
/kind feature
/milestone v0.1.x
/retitle ✨ [VC] Support Staging Releases for VC code base
@christopherhein Want to confirm with you that the target here is:
Whenever there is a code merge in virtualcluster, the image build should also be triggered from CAPN by make release?
Because you said: this will be handled in the make release changes.
But what if there is no release at the time the code change happens in virtualcluster?
Should we also handle this for staging releases?
And by the way, I did not find any steps for pushing prod images. The release tag just updates the manifests. Did I miss anything?
The target of this issue is to handle staging releases similar to what we have in capn/Makefile under the make staging-release target, but integrated to work against the VC code base. This will make it so that every merge into main gets a SHA-based release that we can then verify with. The prod releases are actually not automated via Prow; what we do is follow the guide at https://github.com/kubernetes-sigs/cluster-api/blob/master/docs/developer/releasing.md to make sure we tag the release locally, then rerun the make release target to generate the production-ready manifests. Once those are built and the stage was pushed, we manually make a GitHub release and upload the production manifests that were generated locally.
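To make the two flows concrete, here is a loose sketch with placeholder recipes; the variable and target names are illustrative guesses based on the description above, not the actual capn/Makefile:

```makefile
# Illustrative only: target and variable names are assumptions drawn from the
# description above, not the actual capn/Makefile.

# Staging: Prow runs this on every merge into main, so each build is
# addressable by its commit SHA.
STAGING_TAG ?= $(shell git rev-parse --short=12 HEAD)

# Prod: a maintainer tags locally per the cluster-api releasing guide and
# reruns the release target, so the tag resolves to e.g. v0.1.0.
RELEASE_TAG ?= $(shell git describe --tags --abbrev=0)

.PHONY: staging-release release

staging-release:
	@echo "push SHA-tagged staging images: $(STAGING_TAG)"     # placeholder for the real build/push steps

release:
	@echo "generate production manifests for $(RELEASE_TAG)"   # manifests are then uploaded to the GitHub release by hand
```

The key difference is that staging images appear automatically on every merge, while production manifests only exist once a maintainer creates a tag and runs the release target by hand.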
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen