Migrate away from google.com GCP project gke-release
Part of the umbrella issue to migrate away from google.com GCP projects: https://github.com/kubernetes/k8s.io/issues/1469
Part of the umbrella issue to migrate Kubernetes e2e test images/registries to community-owned infrastructure: https://github.com/kubernetes/k8s.io/issues/1458
There are a variety of gcr.io/gke-release images referenced in kubernetes/kubernetes CI (and sundry kubernetes-sigs projects). These should be migrated to k8s.gcr.io where possible.
Suggested path forward:
- Gather the list of images from cs.k8s.io: https://cs.k8s.io/?q=gcr.io%2Fgke-release&i=nope&files=&repos=
- For each image:
  - determine whether there is an equivalent in k8s.gcr.io (see the sketch after this list)
  - if not, work with the subproject owner to set up a staging project if they need one; build / stage / promote to k8s.gcr.io
  - change the image references to k8s.gcr.io
- Announce a deprecation window
- Flag the project for deletion post-deprecation window
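A minimal sketch of the existence check in the first sub-step, using go-containerregistry's `crane` package; the image names below are illustrative examples, not the gathered list, and the assumption that the repo path is identical under k8s.gcr.io may not hold for every image:

```go
package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	// Illustrative image paths only; the real list comes from cs.k8s.io.
	images := []string{
		"pause:3.4.1",
		"gcp-compute-persistent-disk-csi-driver:v1.0.1-gke.0",
	}
	for _, img := range images {
		src := "gcr.io/gke-release/" + img
		dst := "k8s.gcr.io/" + img // assumes the same repo path under k8s.gcr.io
		// crane.Digest fetches the manifest digest for the reference; an
		// error here usually means no equivalent image has been promoted yet.
		if _, err := crane.Digest(dst); err != nil {
			fmt.Printf("%s has no k8s.gcr.io equivalent yet: %v\n", src, err)
			continue
		}
		fmt.Printf("%s exists; references to %s can be switched\n", dst, src)
	}
}
```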
/wg k8s-infra
/area artifacts
/sig testing
/sig release
/sig storage (I see a number of csi images)
/sig windows (I see a pause-win image)
/area release-eng
/milestone v1.21
We've already added Windows support to the pause image, and it has been built and promoted. Here's a PR that will replace the Windows pause image references: https://github.com/kubernetes/kubernetes/pull/98205
@spiffxp I can help here; I'll assign this to myself to start the work.
/assign
Some questions:
The following images do not exist in the community registry. They are defined in this file, but those tags do not exist there:
- gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:a75ec0caa9e3038bd9886b3f36641a624574ff34b064974de6ee45048de3372b
- gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v1.0.1-gke.0

Would it be easier to just copy those images to the community registry? Or, where is this defined, so we can set up the appropriate jobs to build those images and push them to the correct place?
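If a straight copy turns out to be acceptable (note: the community registry is normally populated via the image promoter process, so treat this only as a sketch for getting an image into staging), `crane` can mirror an image by tag or digest. The destination repo below is a hypothetical example, not an established k8s-staging project:

```go
package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	// Source is one of the images above; dst is a hypothetical staging
	// repo used purely for illustration.
	src := "gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v1.0.1-gke.0"
	dst := "gcr.io/k8s-staging-example/gcp-compute-persistent-disk-csi-driver:v1.0.1-gke.0"

	// crane.Copy pulls the manifest (including multi-arch indexes) from
	// src and pushes it unchanged to dst; push credentials come from the
	// local Docker/gcloud keychain.
	if err := crane.Copy(src, dst); err != nil {
		log.Fatalf("copy failed: %v", err)
	}
}
```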
And the final question is related to some Go files. These files:
- https://github.com/kubernetes/kubernetes/blob/master/test/utils/image/manifest.go
- https://github.com/kubernetes/kubernetes/blob/master/test/utils/image/manifest_test.go

define the gcr.io/gke-release registry (a rough sketch of that indirection follows below). What should the replacement for it be?
cc @spiffxp if you can help me with that or point me to someone who can.
mentioning @LappleApple for visibility and maybe for future support.
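For context, the indirection in test/utils/image/manifest.go looks roughly like the sketch below: registries are fields on a struct and each image entry selects one, so the replacement amounts to repointing whichever field carries gcr.io/gke-release. Field names here are approximations, not verbatim from the file:

```go
package images

// Rough sketch of the pattern in test/utils/image/manifest.go; field
// names are approximate, consult the file itself for the real ones.
type RegistryList struct {
	GcRegistry         string `yaml:"gcRegistry"`         // k8s.gcr.io
	GcrReleaseRegistry string `yaml:"gcrReleaseRegistry"` // gcr.io/gke-release
}

// Migrating away from gcr.io/gke-release then means either repointing
// GcrReleaseRegistry at a community-owned registry, or moving the
// affected image entries onto GcRegistry once they have been promoted.
```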
/milestone v1.22
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with `/close`.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Not yet sure what the state of this is.
I will get back to this; it is missing one image that I need to follow up on.
Updated the PR https://github.com/kubernetes/kubernetes/pull/100294; after we merge that, we can close this issue.
Found two more; the others are in the vendor directory or are legacy references that we will not change.
/milestone v1.23
We're really close; I'll check out cs.k8s.io at some point in the next week:
https://cs.k8s.io/?q=gcr.io%2Fgke-release&i=nope&files=&excludeFiles=vendor%2F&repos=
Repos that still reference gcr.io/gke-release:
- [x] kubernetes-sigs/gcp-compute-persistent-disk-csi-driver (https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/pull/836)
- [ ] kubernetes-sigs/gcp-filestore-csi-driver (https://github.com/kubernetes-sigs/gcp-filestore-csi-driver/pull/174)
- [x] kubernetes/cloud-provider-gcp (https://github.com/kubernetes/cloud-provider-gcp/pull/249)
will work on those
PR for kubernetes-sigs/gcp-compute-persistent-disk-csi-driver: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/pull/836
PR for kubernetes-sigs/gcp-filestore-csi-driver: https://github.com/kubernetes-sigs/gcp-filestore-csi-driver/pull/174
@spiffxp I don't see anything left for the cloud-provider-gcp repo:
https://cs.k8s.io/?q=gcr.io%2Fgke-release&i=nope&files=&excludeFiles=vendor%2F&repos=kubernetes/cloud-provider-gcp
Am I missing something?
https://github.com/kubernetes/cloud-provider-gcp/pull/249 merged, which took care of that repo.
We're waiting on https://github.com/kubernetes-sigs/gcp-filestore-csi-driver/pull/174 before we can call this done
/milestone v1.24
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/milestone v1.25
/remove-lifecycle stale
/milestone v1.26
/remove-lifecycle stale
/milestone v1.27
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle stale