Migrate away from google.com GCP project: kubebuilder
Part of the umbrella issue to migrate away from google.com GCP projects: https://github.com/kubernetes/k8s.io/issues/1469
We didn't notice this because it doesn't show up anywhere in kubernetes/test-infra, but apparently kubebuilder uses a Google-internal project (ref: https://github.com/kubernetes/k8s.io/issues/1469#issuecomment-909658529).
We need help from someone from the subproject who has access, to scope out exactly what is used and determine how best to migrate, e.g.:
- maybe this can be satisfied by a k8s-staging-kubebuilder project?
- maybe the staging project needs some special-case functionality enabled?
/wg k8s-infra
/sig api-machinery
/priority important-soon
/milestone v1.23
/assign @leilajal
I'm assigning this to you to help scope out how the internal project is used and what needs to be moved over. Feel free to reassign to someone who is more familiar with kubebuilder's release process.
/milestone clear
/help wanted
/cc @kevindelgado
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
/remove-priority
/priority blacklog
@ameukam: The label(s) priority/blacklog cannot be applied, because the repository doesn't have them.
In response to this:
/remove-lifecycle stale
/lifecycle frozen
/remove-priority
/priority blacklog
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/priority backlog
Hi folks,
Here is what kubebuilder and controller-runtime currently use from Google Cloud:
- a) We use it to build and host the images: https://console.cloud.google.com/gcr/images/kubebuilder/GLOBAL/kube-rbac-proxy
- b) We use it to build and host kubebuilder-tools (which contains the artifacts used by envtest in controller-runtime): https://storage.googleapis.com/kubebuilder-tools
So if these need to change, we need to align on how that should happen. Please feel free to reach out to us via Slack (the kubebuilder channel; feel free to ping me and @varshaprasad96). Note that all projects consume both, so no longer having them available would be a breaking change that impacts everybody.
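To make that dependency concrete, here is a minimal sketch of how downstream projects typically consume these artifacts in their Go tests (assuming only the standard controller-runtime envtest API; the package and test names are illustrative):

```go
package controllers_test

import (
	"testing"

	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

// Minimal sketch: envtest starts a local kube-apiserver and etcd using the
// binaries shipped in the kubebuilder-tools tarballs, so every project whose
// test suite looks like this depends on those artifacts staying available.
func TestEnvtestStartsAPIServer(t *testing.T) {
	testEnv := &envtest.Environment{}

	// Start fails if the envtest binaries (etcd, kube-apiserver, kubectl)
	// cannot be found locally, e.g. because they were never downloaded.
	cfg, err := testEnv.Start()
	if err != nil {
		t.Fatalf("starting envtest: %v", err)
	}
	defer func() {
		if err := testEnv.Stop(); err != nil {
			t.Errorf("stopping envtest: %v", err)
		}
	}()

	if cfg == nil {
		t.Fatal("expected a rest.Config for the local test API server")
	}
}
```

Because Environment.Start() launches a local control plane from those binaries, test suites across the ecosystem would break if the artifacts disappeared.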
The name of the project in Google Cloud is kubebuilder.
By the way, we need help regaining access to Google Cloud: I had access to keep these maintained, but the email used for it no longer exists, and I have been unable to reach @kevindelgado, who helped us change the email.
cc @rpkatz
So sorry for the delay while I was on paternity leave.
I just granted @camilamacedo86 the same access she had before for the project and her new email. For future reference, my managers @fedebongio and @leilajal also have ownership of the kubebuilder GCP project, in case any one of us can't be reached.
I wanted to document the most recent status:
- We've begun updating the kube-rbac-proxy image. However, the only reason we generate this image ourselves is that the project isn't under a SIG (https://github.com/kubernetes/test-infra/pull/29351/files and https://github.com/kubernetes-sigs/kubebuilder/pull/3362). Ideally, we shouldn't have to re-tag their image but rather consume the one they provide. We're awaiting the completion of its donation so that we can use their official image. More details can be found here: https://github.com/brancz/kube-rbac-proxy/issues/238. Therefore, I would advocate for kubebuilder no longer being responsible for re-tagging the images, and for deprecating/removing that logic as soon as the kube-rbac-proxy project is able to provide the images to us. cc @bihim
- After this step, we'll still require GCP to produce the binaries used by envtest (kubebuilder-tools). Further information can be found here: https://book.kubebuilder.io/reference/artifacts.html
From the current outlook, it seems unlikely that we will be able to avoid using GCP unless there's a proposal to change how those binaries are provided.
More info: https://github.com/kubernetes-sigs/kubebuilder/issues/3230
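For reference, fetching one of those artifacts from the bucket looks roughly like the sketch below. The object name follows the kubebuilder-tools-<version>-<os>-<arch>.tar.gz convention, which is an assumption based on the bucket's public listing; check https://storage.googleapis.com/kubebuilder-tools for the actual object names.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Hypothetical example object; see the bucket listing for real names.
	url := "https://storage.googleapis.com/kubebuilder-tools/kubebuilder-tools-1.28.0-linux-amd64.tar.gz"

	resp, err := http.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "download failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		fmt.Fprintln(os.Stderr, "unexpected status:", resp.Status)
		os.Exit(1)
	}

	// Save the tarball containing etcd, kube-apiserver, and kubectl.
	out, err := os.Create("kubebuilder-tools.tar.gz")
	if err != nil {
		fmt.Fprintln(os.Stderr, "creating file:", err)
		os.Exit(1)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		fmt.Fprintln(os.Stderr, "writing file:", err)
		os.Exit(1)
	}
	fmt.Println("saved envtest binaries tarball")
}
```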
Just an update here:
We revisited this issue and identified two specific needs for continuing to use Google Cloud, as detailed in these discussions: GitHub issue #2647 comment and Slack message.
We attempted to proceed with migrating the kube-rbac-proxy image as per this link: Google Cloud Registry link. However, we hit a hurdle because the project isn't under a SIG. We're actively working on resolving this.
We will look into what we can do about the envtest binaries from the controller-runtime / controller-tools side. I think it makes sense to move them in some way to controller-runtime / controller-tools, where setup-envtest already lives.
It's crucial to emphasize that even if we start generating or promoting artifacts elsewhere, we must keep GCP running and serving the previously generated artifacts for a long period, so that we do not break existing projects and so they have a generous grace period to migrate.
In the case of kube-rbac-proxy, if we stop serving it from the current location, it will be very critical: a lot of projects in production that use it will break.
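As a minimal sketch of what "still being served" means in practice: gcr.io exposes the standard Docker Registry v2 API, so a probe like the one below can confirm the repository remains reachable. The endpoint shape is the generic registry v2 convention, not anything kubebuilder-specific.

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Standard Docker Registry v2 tags endpoint for a public repository.
	resp, err := http.Get("https://gcr.io/v2/kubebuilder/kube-rbac-proxy/tags/list")
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		fmt.Fprintf(os.Stderr, "repository not served (status %d)\n", resp.StatusCode)
		os.Exit(1)
	}
	fmt.Println("gcr.io/kubebuilder/kube-rbac-proxy is still being served")
}
```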
/re-open
/open
/reopen
@sbueringer: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
It looks to me like so far:
- we have plans to sort out the binaries
- we are phasing out the kube-rbac-proxy image
Are there any other container images that should be considered? I see many others in gcr.io/kubebuilder ...
Hi @BenTheElder,
You are right. For the Kubebuilder project itself, those are still in use (kube-rbac-proxy and the envtest binaries), and we have a plan for them. Beyond that, the majority of what we have there is from older versions, so it should be fine; it seems to have been unused for a long time.
However, I took another look at the GCP project and found that we also use it to generate the image gcr.io/kubebuilder/pr-verifier:$TAG_NAME. This image, generated by https://github.com/kubernetes-sigs/kubebuilder-release-tools, is used in CI to verify PR emojis for release notes. I think @vincepri and @sbueringer are the ones who have been taking care of https://github.com/kubernetes-sigs/kubebuilder-release-tools.
So we need a plan for this image as well, since it will no longer be available after March 18, 2025.
"taking care" is relative :) (I'm not even reviewer there :joy:) But good point, didn't think about this one. I think there is no reason why it couldn't use the regular image promotion
Hi @sbueringer,
I think the plan would be to use the shared e2e infra and the regular image promotion too. +1
Quick update on the setup-envtest / envtest binary situation.
PR merged to retrieve the envtest binaries from controller-tools releases: https://github.com/kubernetes-sigs/controller-runtime/pull/2811. It was also cherry-picked to CR release-0.18: https://github.com/kubernetes-sigs/controller-runtime/pull/2837.
This means:
- setup-envtest@release-0.18 will retrieve the binaries from GCS by default, but allows downloading from controller-tools releases (I didn't want to change the default behavior on a release branch / patch release)
- setup-envtest@main / latest / the upcoming 0.19 (release ~August) will download from controller-tools releases by default, while still allowing downloads from the GCS bucket
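A small sketch of why this change is transparent to test authors: tests only ever see a local directory of binaries, regardless of where setup-envtest downloaded them from. This assumes the standard envtest API and the conventional KUBEBUILDER_ASSETS flow; the test name is illustrative.

```go
package controllers_test

import (
	"os"
	"testing"

	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

// Whether setup-envtest fetched the binaries from the legacy GCS bucket or
// from controller-tools GitHub releases makes no difference here: the test
// just points envtest at a local directory.
func TestWithExplicitBinaryDir(t *testing.T) {
	testEnv := &envtest.Environment{
		// Conventionally populated beforehand, e.g. via
		// `setup-envtest use -p path <version>`.
		BinaryAssetsDirectory: os.Getenv("KUBEBUILDER_ASSETS"),
	}

	if _, err := testEnv.Start(); err != nil {
		t.Fatalf("starting envtest: %v", err)
	}
	defer testEnv.Stop()
}
```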