Resolve release series labels in e2e config
User Story
As a developer I would like to have an easily consumable way to find out the latest stable release of a CAPI release series (e.g. v0.3.x or v0.4.x).
Detailed Description
Kubernetes currently exposes the latest stable releases (overall or per release series) via:
- http://dl.k8s.io/release/stable.txt
- http://dl.k8s.io/release/stable-1.21.txt
This is useful when consuming Kubernetes, for example in CI. We currently have the same use case: we want to reference the latest clusterctl binary of the v0.3.x release series in our clusterctl upgrade e2e test (xref: https://github.com/kubernetes-sigs/cluster-api/pull/4995#issuecomment-884939678).
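For illustration, a minimal sketch (in Go) of how such a marker file can be consumed; the Kubernetes URL is the real one linked above, while wiring it into a test or CI script is the hypothetical part:

```go
// Minimal sketch: fetch a version marker file and use its content to pin a
// release. The Kubernetes endpoint is real; a CAPI equivalent does not exist yet.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func latestStable(markerURL string) (string, error) {
	resp, err := http.Get(markerURL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	// The marker file contains a single version string, e.g. "v1.21.3".
	return strings.TrimSpace(string(body)), nil
}

func main() {
	v, err := latestStable("https://dl.k8s.io/release/stable-1.21.txt")
	if err != nil {
		panic(err)
	}
	fmt.Println(v)
}
```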
/kind feature
I like the idea. At least in a first iteration, marker files can be stored in the same buckets we are using for nightly build artifacts.
/area testing
even if this could have different applications
Is there any automation we could reuse?
/priority important-longterm
I have zero idea how they do it or where to start looking. If I had to guess, those files are updated when publishing the actual k/k release (but I don't know anything about all this stuff). @dims?
@sbueringer I can dig things up, but it's probably easier for @palnabarun as he knows a lot more about the release stuff.
/milestone Next
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
/assign @sonasingh46
@randomvariable: GitHub didn't allow me to assign the following users: sonasingh46.
Note that only kubernetes-sigs members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
In response to this:
> /assign @sonasingh46
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@sonasingh46 if you want, you can assign yourself to the issue (this should also work if you are not an org member yet)
/assign sonasingh46
@sonasingh46 Have you had time to work on this?
Hey @killianmuldoon -- I did not get a chance to start on this. I can start looking at this one next week.
If you're still willing to pick it up that would be awesome 😄 . We've got a lot of point release updates to do today, and something like this would really help automate the process!
Sorry I missed the ping here!
We update the version markers during every release cut. krel (the Kubernetes release tooling) has a step in the process to update the marker file inside a publicly viewable GCS bucket. dl.k8s.io is just a redirect to that GCS bucket URL.
You can have a look at the code here: https://github.com/kubernetes/release/blob/f55d5af19f1fab3e8cf6832abf331f95452a342d/pkg/release/publish.go#L155
My suggestions for a detailed plan here would be:
- Request a GCS bucket from sig-k8s-infra.
- Write some code using release.Publisher(...) to publish the marker to the GCS bucket.
- Run a Prow Job for every pushed tag / integrate code in (2) with the existing mechanism of releasing CAPI.
Let me know in case you need any help! Happy to review stuff.
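As a rough illustration of step 2 above, here is a hedged sketch that publishes a version marker to a GCS bucket. It uses the plain cloud.google.com/go/storage client rather than release.Publisher (whose exact API isn't reproduced here), and the bucket name and object path are placeholders, not the project's actual layout:

```go
// Hypothetical sketch of step 2: write a version marker (e.g. stable-0.3.txt)
// to a GCS bucket. The real implementation could reuse release.Publisher from
// kubernetes/release instead of the raw storage client used here.
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/storage"
)

func publishMarker(ctx context.Context, bucket, marker, version string) error {
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("creating GCS client: %w", err)
	}
	defer client.Close()

	// Overwrite the marker object with the new version string, e.g. "v0.3.25".
	w := client.Bucket(bucket).Object(marker).NewWriter(ctx)
	w.ContentType = "text/plain"
	if _, err := fmt.Fprintln(w, version); err != nil {
		return fmt.Errorf("writing marker: %w", err)
	}
	return w.Close()
}

func main() {
	ctx := context.Background()
	// "k8s-staging-cluster-api" and "release/stable-0.3.txt" are assumptions,
	// not the project's actual bucket layout.
	if err := publishMarker(ctx, "k8s-staging-cluster-api", "release/stable-0.3.txt", "v0.3.25"); err != nil {
		log.Fatal(err)
	}
}
```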
@palnabarun -- Thank you very much. Will jump on to this one and reach out for help and reviews.
After digging into this a little bit, it occurred to me that version resolution is relative to a provider (CAPI latest != CAPV latest).
Given this, using version markers in a GCS bucket gets complicated, because each provider would have to change its release process to publish them; therefore I'm proposing an alternative method to discover version info, similar to the one implemented in https://github.com/kubernetes-sigs/cluster-api/blob/535219878ced6ce483c87fe49b4a756c0f83a85a/hack/tools/mdbook/releaselink/releaselink.go#L65-L98
I also suggest the following UX in the docker.yaml file:
when configuring providers:
```diff
 - name: cluster-api
   type: CoreProvider
   versions:
-  - name: v0.3.23 # latest published release in the v1alpha3 series; this is used for v1alpha3 --> v1beta1 clusterctl upgrades test only.
-    value: "https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.23/core-components.yaml"
+  - name: "{goproxy://sigs.k8s.io/cluster-api@stable-0.3}" # latest published release in the v1alpha3 series; this is used for v1alpha3 --> v1beta1 clusterctl upgrades test only.
+    value: "https://github.com/kubernetes-sigs/cluster-api/releases/download/{goproxy://sigs.k8s.io/cluster-api@stable-0.3}/core-components.yaml"
```
when configuring the clusterctl binary in variables:
```diff
-INIT_WITH_BINARY: "https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.4.4/clusterctl-{OS}-{ARCH}"
+INIT_WITH_BINARY: "https://github.com/kubernetes-sigs/cluster-api/releases/download/{goproxy://sigs.k8s.io/cluster-api@stable-0.4}/clusterctl-{OS}-{ARCH}"
```
NOTE: I'm proposing a URL-like syntax because it makes things explicit, and it also opens up support for alternative ways to discover version info if required.
To start, we need to resolve the following types of markers:
- `stable-X.Y` --> latest release in the series, excluding pre-releases, e.g. v0.3.25
- `latest-X.Y` --> latest release in the series, including pre-releases, e.g. v1.1.0-rc0 (eventually we will add more, e.g. for nightly builds)
Ideally, markers should be resolved when reading the docker.yaml config file, so that all consumers of the e2e framework benefit from marker resolution without changing tests.
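To make the proposal concrete, here is a hedged sketch of how marker resolution could work against the public Go module proxy, along the lines of the releaselink tool linked above; resolveMarker and its marker parsing are illustrative, not an existing cluster-api API:

```go
// Hedged sketch of marker resolution against the Go module proxy: list all
// known versions of the module and pick the newest one in the requested series.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"

	"golang.org/x/mod/semver"
)

// resolveMarker resolves markers such as "stable-0.3" (no pre-releases) or
// "latest-0.3" (pre-releases included) for the given module path.
func resolveMarker(module, marker string) (string, error) {
	includePrereleases := strings.HasPrefix(marker, "latest-")
	series := "v" + strings.TrimPrefix(strings.TrimPrefix(marker, "stable-"), "latest-")

	// The /@v/list endpoint is part of the Go module proxy protocol.
	resp, err := http.Get(fmt.Sprintf("https://proxy.golang.org/%s/@v/list", module))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}

	best := ""
	for _, v := range strings.Fields(string(body)) {
		if !semver.IsValid(v) || semver.MajorMinor(v) != series {
			continue
		}
		if !includePrereleases && semver.Prerelease(v) != "" {
			continue
		}
		if best == "" || semver.Compare(v, best) > 0 {
			best = v
		}
	}
	if best == "" {
		return "", fmt.Errorf("no version found for marker %q", marker)
	}
	return best, nil
}

func main() {
	v, err := resolveMarker("sigs.k8s.io/cluster-api", "stable-0.3")
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // e.g. v0.3.25
}
```

The same function would work for any provider's module path, which addresses the CAPI latest != CAPV latest concern above.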
Sounds perfect!
Sounds good to me! @fabriziopandini
Short note from this Slack post: https://kubernetes.slack.com/archives/C8TSNPY4T/p1643976829466139
I think we missed an edge case with the “version resolution logic”. We were assuming it’s enough to resolve the version in the e2e config file (CAPD: docker.yaml) when loading the file. But in the case of INIT_WITH_BINARY we have to resolve the version in the clusterctl e2e test, because that config can be passed in:
- either as env var / docker.yaml
- or via ClusterctlUpgradeSpecInput
It doesn’t change a lot, only that we have to call the “version resolution func” from the clusterctl upgrade test as well.
/retitle Resolve release series labels in e2e config
/milestone v1.2
@sbueringer am I wrong, or has this already been addressed?
My current information is that @sonasingh46 wanted to work on it.
I have not started to work on it yet. Was on PTO. Will prioritise it next week. Do we have a release deadline/blocker?
> I have not started to work on it yet. Was on PTO. Will prioritise it next week. Do we have a release deadline/blocker?
As far as I'm aware, no. It's a nice improvement for us and also for providers, but we don't have any pressure to get it in soon.
Hey folks, I did not get a chance to start working on this and am occupied for a couple of weeks. If anyone is interested in the meantime, feel free to pick this up.
/triage accepted
/help-wanted
(doing some cleanup on old issues without updates)
/close
Unfortunately no one is picking up the issue; we can look back at the idea above even if the issue is closed.