
Migrate away from google.com gcp project kubebuilder

Open camilamacedo86 opened this issue 2 years ago • 10 comments

What do you want to happen?

Description

Today we use GCP and have a project, kubebuilder. We need to move this infrastructure as described in this task: https://github.com/kubernetes/k8s.io/issues/2647. Currently, the GCP infra is used to build some artefacts.

kube-rbac-proxy images

These images are used to build a sidecar for the manager, see:

https://github.com/kubernetes-sigs/kubebuilder/blob/3044376bff38c796b1ea7f4ade862e0a621b74d7/testdata/project-v2/config/default/manager_auth_proxy_patch.yaml#L11-L13
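For context, the scaffolded patch injects kube-rbac-proxy roughly like this (a simplified sketch, not the exact scaffold; the image tag and proxy flags vary by Kubebuilder version):

```yaml
# Sketch of a manager_auth_proxy_patch.yaml-style patch (illustrative values).
# The image is served from the GCP project this issue migrates away from.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
      - name: kube-rbac-proxy
        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1  # example tag
        args:
        - "--secure-listen-address=0.0.0.0:8443"
        - "--upstream=http://127.0.0.1:8080/"
        - "--logtostderr=true"
```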

To know how they are built, you can check it here.

kubebuilder-tools

The kubebuilder tools ship the required binaries to test the projects using EnvTest. We scaffold a target to download them in the projects as well:

https://github.com/kubernetes-sigs/kubebuilder/blob/3044376bff38c796b1ea7f4ade862e0a621b74d7/testdata/project-v4/Makefile#L57-L59
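For reference, the Makefile target above boils down to fetching a tarball from the legacy bucket, whose URL follows a predictable pattern. A minimal sketch (K8S_VERSION, OS, and ARCH are illustrative values):

```shell
#!/bin/sh
# Compose the legacy kubebuilder-tools tarball URL (sketch; values are examples).
K8S_VERSION=1.28.0
OS=linux
ARCH=amd64
URL="https://storage.googleapis.com/kubebuilder-tools/kubebuilder-tools-${K8S_VERSION}-${OS}-${ARCH}.tar.gz"
echo "$URL"
```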

To know how they are built, you can check it here.

Goal

This task aims to migrate the infrastructure so that these artefacts are built and provided from the new location.

Motivation

To learn more, see: https://kubernetes.io/blog/2020/05/27/an-introduction-to-the-k8s-infrastructure-working-group/

Impact

Projects scaffolded so far will still depend on the currently published images. This is a breaking change and can indeed break workloads running on the cluster (in the case of the proxy image).

So, we will need to provide these artefacts from the new locations, see if we can also copy all previously built artefacts to the new place, and communicate the change as soon as possible.

IMPORTANT: We should not stop building in the old/current infrastructure until we stop using it. That means we need to build in both places for a time. I would also suggest starting to build these artefacts in the new infrastructure now.

What do we need to do

  • [x] - Create the new infrastructure - https://github.com/kubernetes/k8s.io/pull/4586
  • [x] - Add the jobs to start to build and produce the artefacts in the new infra : https://github.com/kubernetes/test-infra/pull/29351/files
  • [x] - Ensure that we will have a target to build the image using GCP and another using the new infra, see: https://github.com/kubernetes-sigs/kubebuilder/pull/3362
  • [ ] - Make a copy from all that is provided today in the current locations to the new one
  • [ ] - Update go/v3 and go/v4 scaffolds to gather the image from the new locations
  • [ ] - Update go/v3 and go/v4 makefile target to download the kubebuilder-tools from the new location
  • [ ] - Update kubebuilder docs to point out the new location
  • [ ] - Check whether controller-runtime is impacted and address the required changes: if it uses those artefacts for its tests, those tests need to be updated to gather them from the new location
  • [ ] - Communicate broadly to the community and ask them to update their projects to gather the artefacts from the new location
  • [ ] - Disable the jobs that run in the current infrastructure

References

https://github.com/kubernetes/k8s.io/issues/2647
https://github.com/kubernetes/k8s.io/pull/4586

Extra Labels

No response

camilamacedo86 avatar Feb 16 '23 19:02 camilamacedo86

I can start looking into this.

yashsingh74 avatar Feb 17 '23 07:02 yashsingh74

Hi @yashsingh74,

Thank you for looking into that. But this one is not a good first issue to work on. We will need @rpkatz's help to learn how to use the new infra and how to promote the assets. You might be able to help us out once we have those built; then we can start changing the docs and scaffolds to use them.

camilamacedo86 avatar Feb 17 '23 07:02 camilamacedo86

/assign @rikatz

camilamacedo86 avatar Feb 17 '23 07:02 camilamacedo86

Hi there. I wrote something about it on a huge thread, and copy/pasted to https://docs.google.com/document/d/18EKmym3YJ0Ey3LOrQOeWh6RbnSO-odfKuMNVlHPHDmE/edit?usp=sharing (SORRY!!)

@yashsingh74 we have some subtasks here, as:

  • Create on https://github.com/kubernetes-sigs/kubebuilder/tree/kube-rbac-proxy-releases a new folder that will be used by the new cloudbuild. This way, we won't break the old one ;)
  • Create on https://github.com/kubernetes/test-infra/tree/master/config/jobs/image-pushing a new prowjob to monitor PRs merged on this branch/directory and trigger the cloudbuild. You can use https://github.com/kubernetes/test-infra/blob/master/config/jobs/image-pushing/k8s-staging-ingress-nginx.yaml#L1-L25 as an example
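An image-pushing postsubmit prowjob of the kind mentioned above looks roughly like this (a heavily simplified sketch modeled on the linked ingress-nginx example; the job name, image tag, bucket, and project values are placeholders, not the actual kubebuilder config):

```yaml
# Sketch of an image-pushing postsubmit prowjob (placeholder names and values).
postsubmits:
  kubernetes-sigs/kubebuilder:
  - name: post-kubebuilder-push-kube-rbac-proxy
    cluster: k8s-infra-prow-build-trusted
    branches:
    - ^kube-rbac-proxy-releases$
    decorate: true
    spec:
      serviceAccountName: gcb-builder
      containers:
      - image: gcr.io/k8s-staging-test-infra/image-builder:latest  # placeholder tag
        command:
        - /run.sh
        args:
        - --project=k8s-staging-kubebuilder   # placeholder staging project
        - --scratch-bucket=gs://k8s-staging-kubebuilder-gcb  # placeholder bucket
        - .
```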

We are going to need to tweak some minor stuff in the build script, like pushing to the right repository and using proper substitutions on cloudbuild, etc.

@camilamacedo86 if you and @varshaprasad96 agree, I would like to do the "image replication" work here, mostly getting the images from the old repo and pushing them to the new staging one, so we won't start the manifest file promotion from scratch :)

rikatz avatar Feb 20 '23 21:02 rikatz

I wanted to document the most recent status:

  • We've begun updating the kube-rbac-proxy image. However, the only reason this image is generated by us is that the project isn't under a SIG (https://github.com/kubernetes/test-infra/pull/29351/files and https://github.com/kubernetes-sigs/kubebuilder/pull/3362). Ideally, we shouldn't have to re-tag their image but rather consume the one they provide. We're awaiting the completion of its donation so that we can use their official image. More details can be found here: https://github.com/brancz/kube-rbac-proxy/issues/238. Therefore, I would like to advocate for Kubebuilder no longer being responsible for re-tagging the images, and for deprecating/removing that logic as soon as the kube-rbac-proxy project is able to provide the images to us. c/c @bihim

  • After this step, we'll still require GCP to produce the binaries used by the env test (kubebuilder-tools). Further information can be found here: https://book.kubebuilder.io/reference/artifacts.html

From the current outlook, it seems unlikely that we will be able to avoid using GCP, unless there's a proposal to change the aforementioned binaries.

camilamacedo86 avatar Sep 01 '23 11:09 camilamacedo86

On the kube-rbac-proxy front, @sbueringer implemented https://github.com/kubernetes-sigs/controller-runtime/pull/2407 in the latest controller-runtime.

I haven't had a chance to look at it yet, but I have seen some talk about krp no longer being required with this feature :)

Maybe once kubebuilder supports the new runtime, there's no need to build krp anymore and we can simply migrate the old images.

rikatz avatar Sep 01 '23 11:09 rikatz

Ah sorry, I just saw your latest PR to bump controller-runtime :))) Great! So disregard my comment, but maybe we should not promote krp anymore in favor of the new feature :)

rikatz avatar Sep 01 '23 11:09 rikatz

@rikatz There's a lot of related information in this lengthy Slack thread: https://kubernetes.slack.com/archives/CAR30FCJZ/p1693377335373059

sbueringer avatar Sep 01 '23 12:09 sbueringer

The latest update of this one is:

  • Controller-Runtime maintainers are looking into building the envtest binaries there (in controller-runtime or controller-tools)
  • We could not make the kube-rbac-proxy image rebuild work within the new infra because the project is not under the k8s umbrella yet.

Further info: https://kubernetes.slack.com/archives/CCK68P2Q2/p1711913605487359

camilamacedo86 avatar Apr 09 '24 01:04 camilamacedo86

If there are any images other than kube-rbac-proxy that are not simply being phased out, they need to be moved to registry.k8s.io sooner rather than later. There are docs at https://registry.k8s.io, specifically https://github.com/kubernetes/k8s.io/tree/main/registry.k8s.io#managing-kubernetes-container-registries

(We should just move them, rather than migrating to AR in a google.com internal project, we have plans in motion to migrate ~everything in Kubernetes to be solely on community controlled infra by EOY and GCR requires action by early next year anyhow. While technically we could move GCR to AR ~in-place, it makes more sense to switch to community controlled resources while we're at it)

BenTheElder avatar May 14 '24 20:05 BenTheElder

Conclusion and Latest Status

The GCP project must remain active until March 18, 2025. After that date, the Kubebuilder GCP project can theoretically be shut down: Google has announced that images under gcr.io/kubebuilder will no longer be available. While Kubebuilder no longer promotes new images or artifacts through this repository, some projects built with older versions may still be using images like gcr.io/kubebuilder/kube-rbac-proxy in production environments.

Therefore, we must keep it running for as long as possible, until these older projects have fully migrated away. However, the goal of this issue was to ensure that we migrate away from the GCP project, and from our side, everything has been done to discontinue GCP usage.

Below are the details of what was done and the current status:

  • Kubebuilder releases are now managed via GoReleaser and GitHub Actions. For the last one to two years, they have no longer relied on GCP to trigger builds.
  • Images from the kube-rbac-proxy project, used to protect metrics endpoints: Projects initialized with Kubebuilder release versions >= [v3.15.0](https://github.com/kubernetes-sigs/kubebuilder/releases/tag/v3.15.0) are no longer scaffolded with the kube-rbac-proxy dependency. Instead, they use the built-in features provided by controller-runtime (filters.WithAuthenticationAndAuthorization) by default. You can see an example here. Furthermore, users are given an additional, optional way to protect their metrics endpoint with network policies, as shown here. This change was communicated via the mailing list, Slack channels, release notes, and a discussion topic. For more details, see the discussion: https://github.com/kubernetes-sigs/kubebuilder/discussions/3907
  • kubebuilder-tools tarballs, which provide the EnvTest binaries: The artifacts provided via https://storage.googleapis.com/kubebuilder-tools, which contain the binaries that allow users to use the Controller-Runtime ENVTEST library to test their controllers, are now built and released via controller-tools. You can find the releases at https://github.com/kubernetes-sigs/controller-tools/releases. Since the release of controller-runtime v0.19, setup-envtest downloads the tarballs from the new location. Any project generated with Kubebuilder >= [v4.2.0](https://github.com/kubernetes-sigs/kubebuilder/releases/tag/v4.2.0), which adds support for controller-runtime v0.19 and Kubernetes 1.31, will use this new location. Projects using Kubernetes 1.31 for testing will need to adopt the new location, which ensures a smooth transition. This was communicated via the mailing list, Slack channels, release notes, and a discussion topic. For further information, see the discussion: https://github.com/kubernetes-sigs/kubebuilder/discussions/4082.
  • The image used in the GitHub Action to validate PR titles (from kubebuilder-release-tools) will also break once the GCP images are unavailable. This project is not actively maintained, but Kubebuilder has implemented a simple solution to achieve the same functionality with GitHub Actions and shell scripts. The PR to update the documentation is here. Once this PR is merged, I am committed to sharing the update via the mailing list and channels to ensure everyone is informed.
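As a concrete illustration of the controller-runtime replacement mentioned above, wiring the built-in metrics protection looks roughly like this (a sketch assuming controller-runtime >= v0.19 is available as a module dependency; not a copy of the actual scaffold):

```go
// Sketch: enabling controller-runtime's built-in metrics protection
// instead of the kube-rbac-proxy sidecar.
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/metrics/filters"
	metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
)

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Metrics: metricsserver.Options{
			BindAddress:   ":8443",
			SecureServing: true,
			// Replaces kube-rbac-proxy: requests to /metrics must pass
			// TokenReview / SubjectAccessReview checks.
			FilterProvider: filters.WithAuthenticationAndAuthorization,
		},
	})
	if err != nil {
		panic(err)
	}
	_ = mgr // start the manager with mgr.Start(...) in a real project
}
```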

Therefore, from our side, everything is ready. We just need to give projects enough time to migrate before we fully discontinue GCP usage, so I am closing this one.

camilamacedo86 avatar Sep 15 '24 15:09 camilamacedo86