
Project cloud-provider-vsphere has been deleted.

saintdle opened this issue Sep 19 '24

What happened?

While booting up a cluster, I was unable to get the CSI pods running, which has taken the cluster offline. When troubleshooting, I found I was unable to pull the image; testing with docker pull, I get the following:

❯ docker pull gcr.io/cloud-provider-vsphere/csi/release/driver:v3.3.0
Error response from daemon: Head "https://gcr.io/v2/cloud-provider-vsphere/csi/release/driver/manifests/v3.3.0": denied: Project cloud-provider-vsphere has been deleted.

What did you expect to happen?

The image to be available.

How can we reproduce it (as minimally and precisely as possible)?

docker pull gcr.io/cloud-provider-vsphere/csi/release/driver:v3.3.0

Anything else we need to know (please consider providing level 4 or above logs of CPI)?

No response

Kubernetes version

❯ kubectl version
Client Version: v1.29.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.7
Kubecolor Version: v0.4.0

Cloud provider or hardware configuration, OS version, kernel, install tools, container runtime (CRI), related plugins (CNI, CSI, ...), others

No response

saintdle avatar Sep 19 '24 13:09 saintdle

Images have been migrated to another registry. https://github.com/kubernetes/cloud-provider-vsphere#warning-kubernetes-image-registry-migration-for-cloud-provider-vsphere
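For example, to pull the CSI images from the new location (a sketch; the registry.k8s.io/csi-vsphere paths below are what the current vsphere-csi-driver manifests use, but verify them against the linked migration note):

# Assumed new CSI image locations; confirm against the migration note above
$ docker pull registry.k8s.io/csi-vsphere/driver:v3.3.0
$ docker pull registry.k8s.io/csi-vsphere/syncer:v3.3.0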

hyorch avatar Oct 01 '24 06:10 hyorch

@hyorch thanks for pointing at the note about the new registry. I don't see the older releases (v1.27.0, for example), nor do I quite understand where the various syncer, driver, or manager images for the vSphere storage drivers ended up.

Effectively: where did these images go?

gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.18.0
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.19.0
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.2.1
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.20.0
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.21.1
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.22.3
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.23.1
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.24.2
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.25.0
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.26.0
gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.27.0
gcr.io/cloud-provider-vsphere/csi/release/driver:v2.5.3
gcr.io/cloud-provider-vsphere/csi/release/driver:v2.5.4
gcr.io/cloud-provider-vsphere/csi/release/driver:v2.6.0
gcr.io/cloud-provider-vsphere/csi/release/driver:v2.6.1
gcr.io/cloud-provider-vsphere/csi/release/driver:v2.6.2
gcr.io/cloud-provider-vsphere/csi/release/driver:v2.6.3
gcr.io/cloud-provider-vsphere/csi/release/driver:v2.7.0
gcr.io/cloud-provider-vsphere/csi/release/driver:v3.0.0
gcr.io/cloud-provider-vsphere/csi/release/driver:v3.0.2
gcr.io/cloud-provider-vsphere/csi/release/driver:v3.0.3
gcr.io/cloud-provider-vsphere/csi/release/driver:v3.1.0
gcr.io/cloud-provider-vsphere/csi/release/driver:v3.1.1
gcr.io/cloud-provider-vsphere/csi/release/driver:v3.1.2
gcr.io/cloud-provider-vsphere/csi/release/driver:v3.2.0
gcr.io/cloud-provider-vsphere/csi/release/driver:v3.3.0
gcr.io/cloud-provider-vsphere/csi/release/driver:v3.3.1
gcr.io/cloud-provider-vsphere/csi/release/syncer:v2.5.3
gcr.io/cloud-provider-vsphere/csi/release/syncer:v2.5.4
gcr.io/cloud-provider-vsphere/csi/release/syncer:v2.6.0
gcr.io/cloud-provider-vsphere/csi/release/syncer:v2.6.1
gcr.io/cloud-provider-vsphere/csi/release/syncer:v2.6.2
gcr.io/cloud-provider-vsphere/csi/release/syncer:v2.6.3
gcr.io/cloud-provider-vsphere/csi/release/syncer:v2.7.0
gcr.io/cloud-provider-vsphere/csi/release/syncer:v3.0.0
gcr.io/cloud-provider-vsphere/csi/release/syncer:v3.0.2
gcr.io/cloud-provider-vsphere/csi/release/syncer:v3.0.3
gcr.io/cloud-provider-vsphere/csi/release/syncer:v3.1.0
gcr.io/cloud-provider-vsphere/csi/release/syncer:v3.1.1
gcr.io/cloud-provider-vsphere/csi/release/syncer:v3.1.2
gcr.io/cloud-provider-vsphere/csi/release/syncer:v3.2.0
gcr.io/cloud-provider-vsphere/csi/release/syncer:v3.3.0
gcr.io/cloud-provider-vsphere/csi/release/syncer:v3.3.1
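
One way to check what is actually published at the new locations is to list tags with crane (from go-containerregistry). A sketch; the repository paths below are assumptions based on the migration note, not a confirmed mapping:

# List available tags under the assumed new CSI and CPI repositories
$ crane ls registry.k8s.io/csi-vsphere/driver
$ crane ls registry.k8s.io/cloud-pv-vsphere/cloud-provider-vsphere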

addyess avatar Oct 07 '24 15:10 addyess

@hyorch nice warning, but why are the manifests for those versions still referencing the old gcr.io images? And is there a list/mapping of old to new images so I can replace them manually?
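
(A minimal sketch of doing that replacement by hand, assuming the old-to-new mapping gcr.io/cloud-provider-vsphere/csi/release -> registry.k8s.io/csi-vsphere used by the current vsphere-csi-driver manifests; vsphere-csi-driver.yaml stands for whatever local copy of the manifest you are editing. Verify the mapping before applying:)

# Rewrite old image references in a downloaded manifest to the assumed new registry
$ sed -i 's|gcr.io/cloud-provider-vsphere/csi/release|registry.k8s.io/csi-vsphere|g' vsphere-csi-driver.yaml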

These issues regarding the registry switch are now all two months old, and there is still no well-documented way to install the CSI driver using the new registry?

This registry switch does not seem to have been planned and executed well...

P.S. @hyorch after a second read this sounded like I'm blaming you :) sorry, this was meant as a general rant about how badly this was handled.

erSitzt avatar Nov 18 '24 09:11 erSitzt

I don't work for vSphere. I think they deleted the images from gcr.io ahead of the gcr.io shutdown planned for May 2025. https://cloud.google.com/artifact-registry/docs/transition/gcr-repositories

The problem is if you are using an old version of the containers and just try to change the container version in your deployments: new releases need new deployment parameters. So if you only upgrade the container image versions on old deployments, your deployments will not work. You have to upgrade both the container versions and the deployment manifests (StatefulSets, Deployments, etc.). You can review the new parameters/configuration on the Kubernetes-SIGs GitHub page: https://github.com/kubernetes-sigs/vsphere-csi-driver/tree/master/manifests/vanilla
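
In other words, deploy from the current manifests rather than editing image tags in place. A rough sketch (the raw URL below assumes the vanilla manifest layout in that repo; check the linked directory for the actual file names and the release tag you need):

# Apply the current vanilla manifests instead of patching image tags on an old deployment
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v3.3.1/manifests/vanilla/vsphere-csi-driver.yaml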

hyorch avatar Nov 18 '24 09:11 hyorch

@hyorch yeah, I know :)

I just saw that the other repository has better info on the change and the new images... and correct manifests.

erSitzt avatar Nov 18 '24 10:11 erSitzt

Sorry that the CPI images below v1.28.0 were lost during the registry switch. We have a plan to restore them, but until then you can use mirrors from Docker Hub, such as https://hub.docker.com/r/rancher/mirrored-cloud-provider-vsphere-cpi-release-manager/tags.
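
For example (a sketch; pick a tag that actually exists on the linked Docker Hub page, v1.27.0 here is just illustrative):

# Pull an older CPI manager release from the Rancher mirror
$ docker pull rancher/mirrored-cloud-provider-vsphere-cpi-release-manager:v1.27.0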

DanielXiao avatar Nov 19 '24 02:11 DanielXiao

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 17 '25 02:02 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Mar 19 '25 03:03 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Apr 18 '25 04:04 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the triage bot's /close not-planned comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Apr 18 '25 04:04 k8s-ci-robot