minikube unable to enable registry / `gcr.io/k8s-minikube/kicbase:v0.0.44` suddenly missing
What Happened?
We have a CI job for Docker & Helm testing on GitHub, for which we need a registry. This job started to fail around Aug 29, 00:00 UTC (at least within the last 10 hours).
It seems the culprit here is this warning: minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.44, but successfully downloaded docker.io/kicbase/stable:v0.0.44 as a fallback image - at least that's the only difference I can spot.
Another difference (I suspect it's a consequence of the former) is that the registry addon could not be verified:
```
Thu, 29 Aug 2024 02:39:48 GMT * Verifying registry addon...
Thu, 29 Aug 2024 02:45:48 GMT ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
```
This is the minikube restart command.
The last two successful runs are this and this - the Setup Docker registry steps are the interesting ones.
The current failed runs are here and here - the Setup Docker registry steps are the interesting ones.
Logs of the "successful" minikube start:
```
* minikube v1.33.1 on Ubuntu 22.04
* Automatically selected the docker driver. Other choices: podman, none, ssh
* Using Docker driver with root privileges
* Starting "minikube" primary control-plane node in "minikube" cluster
* Pulling base image v0.0.44 ...
* Creating docker container (CPUs=2, Memory=3900MB) ...
* Preparing Kubernetes v1.30.0 on Docker 26.1.1 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
  - Using image docker.io/registry:2.8.3
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying registry addon...
* Enabled addons: storage-provisioner, default-storageclass, registry
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
```
Logs of the "failed" ones:
```
* minikube v1.33.1 on Ubuntu 22.04
* Automatically selected the docker driver. Other choices: podman, none, ssh
* Using Docker driver with root privileges
* Starting "minikube" primary control-plane node in "minikube" cluster
* Pulling base image v0.0.44 ...
! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.44, but successfully downloaded docker.io/kicbase/stable:v0.0.44 as a fallback image
* Creating docker container (CPUs=2, Memory=3900MB) ...
* Preparing Kubernetes v1.30.0 on Docker 26.1.1 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
  - Using image docker.io/registry:2.8.3
  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
* Verifying registry addon...
! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
```
Note: the actual minikube start doesn't fail, but it doesn't enable the registry, which causes a later CI step to fail (because there's no registry).
Attach the log file
See snippets and links above.
Operating System
Ubuntu
Driver
None
gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2 is also unavailable for me (with a 403), which I suppose stems from the same problem.
Yea - if I open it in a browser, Google Container Registry says "You need additional access" :shrug:
Um - minikube no longer starts on my local machine (with podman):
```
$ minikube start --driver=podman --container-runtime=cri-o --insecure-registry="192.168.49.2:5000" --addons=registry --memory="8192" --cpus=8
😄  minikube v1.33.1 on Ubuntu 24.04
✨  Using the podman driver based on user configuration
📌  Using Podman driver with root privileges
👍  Starting "minikube" primary control-plane node in "minikube" cluster
🚜  Pulling base image v0.0.44 ...
E0829 10:23:58.768164   70159 cache.go:189] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥  Creating podman container (CPUs=8, Memory=8192MB) ...
🤦  StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: sudo -n podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.44 -d /var/lib: exit status 125
stdout:
stderr:
Trying to pull gcr.io/k8s-minikube/kicbase:v0.0.44...
Error: initializing source docker://gcr.io/k8s-minikube/kicbase:v0.0.44: reading manifest v0.0.44 in gcr.io/k8s-minikube/kicbase: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
🔄  Restarting existing podman container for "minikube" ...
😿  Failed to start podman container. Running "minikube delete" may fix it: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125
stdout:
stderr:
Error: no container with name or ID "minikube" found: no such container
❌  Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125
stdout:
stderr:
Error: no container with name or ID "minikube" found: no such container

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
```
Same for me. It looks like the whole gcr.io/k8s-minikube project was either removed or put behind authorization.
If you manage to get minikube started and your only issue is enabling the registry add-on, then this hack of overriding the kube-registry-proxy image might help: https://github.com/kubernetes/minikube/issues/19533#issuecomment-2317015677
Another workaround for the minikube start portion: if minikube start is not falling back to the docker.io version of kicbase, you can force it to by adding --base-image=docker.io/kicbase/stable:v0.0.44 to minikube start.
When enabling the registry addon, you can override the proxy image it uses with minikube addons enable registry --images="KubeRegistryProxy=gcr.io/google_containers/kube-registry-proxy:0.4"
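For a GitHub Actions CI job like the one described above, both workarounds could be combined into one setup step. A hypothetical sketch (the step name, driver, and flag values beyond those quoted in this thread are assumptions, not something minikube or this issue prescribes):

```yaml
# Hypothetical "Setup Docker registry" CI step combining both workarounds.
- name: Setup Docker registry
  run: |
    # Skip gcr.io entirely by forcing the docker.io fallback base image.
    minikube start --driver=docker \
      --base-image=docker.io/kicbase/stable:v0.0.44
    # Enable the registry addon with an alternative kube-registry-proxy image,
    # since the default gcr.io/k8s-minikube one was unavailable.
    minikube addons enable registry \
      --images="KubeRegistryProxy=gcr.io/google_containers/kube-registry-proxy:0.4"
```

Once upstream access to gcr.io/k8s-minikube is restored, both overrides can simply be dropped again.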
Had the same issue pulling gcr.io/k8s-minikube/kube-registry-proxy:0.0.6@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367, and it has literally just become available again. Maybe try pulling the image again to see if it's fixed for you as well?
See https://github.com/kubernetes/minikube/issues/19541, this should now be resolved.
@spowelljr thanks for the explanation and the fix!
Everything's working fine for us - both my local machine and our CI are happy!