Tests fail with "You have reached your unauthenticated pull rate limit"
What Happened?
Looking at the many failures for https://github.com/kubernetes/minikube/pull/20720:
- Docker_Linux_containerd_arm64 65 errors
- Docker_Linux_crio 91 errors
- Docker_Linux_crio_arm64 0 errors
- Docker_Linux_docker_arm64 0 errors
- KVM_Linux_containerd 160 errors
- KVM_Linux_crio 0 errors
We need to pull images from a registry that is not subject to this rate limit.
In ramen we use quay.io to avoid this. Another solution is a local registry running in the CI environment.
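For the local-registry option, a rough sketch (assuming a pull-through cache on the CI host; the port, the mirror address as seen from the guest, and whether `--registry-mirror` is honored by the container runtime in use are all assumptions to verify):

```bash
# Run a pull-through cache of docker.io on the CI host (port is illustrative).
docker run -d --name dockerhub-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# Point the cluster at the mirror; the address must be one the guest can
# actually reach (the host IP shown here is illustrative).
minikube start --registry-mirror=http://192.168.49.1:5000
```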
We have 2 issues:
- Pulls from the test host: this was mostly fixed by logging in to docker.io in the test script
- Pulls from inside the guest: this requires providing a pull secret for all namespaces, or, hopefully, configuring the internal container runtime with docker.io credentials (see the sketch after this list).
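For the pull-secret variant, a minimal sketch assuming docker.io credentials are exposed to the test script as environment variables (the secret name and namespace are illustrative, and the step has to be repeated for every namespace the tests use):

```bash
# Create a docker.io pull secret in the namespace.
kubectl create secret docker-registry dockerhub-cred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username="$DOCKERHUB_USER" \
  --docker-password="$DOCKERHUB_PASS" \
  --namespace=default

# Attach it to the default service account so pods in that namespace use it
# without changing every pod spec.
kubectl patch serviceaccount default --namespace=default \
  -p '{"imagePullSecrets": [{"name": "dockerhub-cred"}]}'
```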
@medyagh suggested doing this before using the guest:
minikube ssh -- docker login docker.io --username ... --password ...
We will need a different command for each container runtime, so maybe we need:
minikube login ...
that will do the right thing. Or maybe we can inject the docker credentials using ~/.minikube/files?
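A sketch of the ~/.minikube/files idea, assuming files placed under ~/.minikube/files are copied into the guest rooted at /, and that the kubelet picks up a Docker-style config.json at /var/lib/kubelet/config.json (which would cover any container runtime); the variable names are illustrative:

```bash
# Build a Docker-style auth entry for docker.io on the host.
AUTH=$(printf '%s:%s' "$DOCKERHUB_USER" "$DOCKERHUB_PASS" | base64 | tr -d '\n')
mkdir -p ~/.minikube/files/var/lib/kubelet
cat > ~/.minikube/files/var/lib/kubelet/config.json <<EOF
{
  "auths": {
    "https://index.docker.io/v1/": { "auth": "$AUTH" }
  }
}
EOF

# On the next cluster creation the file should land at
# /var/lib/kubelet/config.json inside the guest.
minikube start
```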
The second issue was discussed in Slack: https://kubernetes.slack.com/archives/C1F5CT6Q1/p1753130576600499
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/kind failing-test
/kind bug
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
@medyagh can you stop the lifecycle bot from trying to close this? This should be top priority and cannot be closed without fixing it.