minikube podman driver not supporting insecure registry
What Happened?
It has been found, when comparing the Podman driver (experimental) to the Docker driver while utilizing the registry addon, that an insecure registry does not work by default with Podman.
minikube start --driver=podman --container-runtime=cri-o --addons=registry
I was able to work around the problem and confirm this was a minikube-related issue by doing the following:
# get IP of registry
kubectl -n kube-system get service registry -o jsonpath='{.spec.clusterIP}'
# returned: 10.103.36.20
# modify minikube's registry configuration
minikube ssh
sudo vi /etc/containers/registries.conf
# added the following entry, using the IP from above:
[[registry]]
location = "10.103.36.20"
insecure = true
# restart crio
sudo systemctl restart crio
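To sanity-check the workaround, the registry can also be queried over plain HTTP from inside the node (a hedged sketch; 10.103.36.20 is the clusterIP returned above and will differ per cluster):
# still inside `minikube ssh`, confirm the registry answers over HTTP
curl http://10.103.36.20/v2/_catalog
# a Docker Registry v2 endpoint should return JSON along the lines of {"repositories":[...]}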
If I'm reading the logs correctly, they show the image pull failing over https://; after applying the change above, the image was pulled successfully and the pod started.
Attach the log file
Feb 26 16:56:50 minikube kubelet[1473]: E0226 16:56:50.671987 1473 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"integration\" with ImagePullBackOff: \"Back-off pulling image \\\"10.103.36.20/camel-k/camel-k-kit-cuvk3dgbo3fs73945emg@sha256:0548e6c998e3b6ca919b6687a0177705f31c74e7f34dcbcb3dbc4e3cce0b708e\\\"\"" pod="default/ticker-769cddbc48-p8rl5" podUID="b135a32a-c62e-4c3c-b1dd-f37a2016732b"
Feb 26 16:57:01 minikube kubelet[1473]: E0226 16:57:01.671257 1473 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"integration\" with ImagePullBackOff: \"Back-off pulling image \\\"10.103.36.20/camel-k/camel-k-kit-cuvk3dgbo3fs73945emg@sha256:0548e6c998e3b6ca919b6687a0177705f31c74e7f34dcbcb3dbc4e3cce0b708e\\\"\"" pod="default/ticker-769cddbc48-p8rl5" podUID="b135a32a-c62e-4c3c-b1dd-f37a2016732b"
Feb 26 16:57:13 minikube kubelet[1473]: E0226 16:57:13.672209 1473 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry 10.103.36.20: Get \"https://10.103.36.20/v2/\": dial tcp 10.103.36.20:443: connect: connection refused" image="10.103.36.20/camel-k/camel-k-kit-cuvk3dgbo3fs73945emg@sha256:0548e6c998e3b6ca919b6687a0177705f31c74e7f34dcbcb3dbc4e3cce0b708e"
Feb 26 16:57:13 minikube kubelet[1473]: E0226 16:57:13.672238 1473 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry 10.103.36.20: Get \"https://10.103.36.20/v2/\": dial tcp 10.103.36.20:443: connect: connection refused" image="10.103.36.20/camel-k/camel-k-kit-cuvk3dgbo3fs73945emg@sha256:0548e6c998e3b6ca919b6687a0177705f31c74e7f34dcbcb3dbc4e3cce0b708e"
Feb 26 16:57:13 minikube kubelet[1473]: E0226 16:57:13.672334 1473 kuberuntime_manager.go:1256] container &Container{Name:integration,Image:10.103.36.20/camel-k/camel-k-kit-cuvk3dgbo3fs73945emg@sha256:0548e6c998e3b6ca919b6687a0177705f31c74e7f34dcbcb3dbc4e3cce0b708e,Command:[java],Args:[-Xmx268M -cp ./resources:/etc/camel/application.properties:/etc/camel/resources:/etc/camel/resources.d/_configmaps:/etc/camel/resources.d/_secrets:/etc/camel/sources/camel-k-embedded-flow.yaml:dependencies/*:dependencies/app/*:dependencies/lib/boot/*:dependencies/lib/main/*:dependencies/quarkus/* io.quarkus.bootstrap.runner.QuarkusEntryPoint],WorkingDir:/deployments,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CAMEL_K_DIGEST,Value:v9zA3JmE0zhuu_FNaKSDBg7CEsDUf8tLxVxTlWvpozDE,ValueFrom:nil,},EnvVar{Name:CAMEL_K_CONF,Value:/etc/camel/application.properties,ValueFrom:nil,},EnvVar{Name:CAMEL_K_CONF_D,Value:/etc/camel/conf.d,ValueFrom:nil,},EnvVar{Name:CAMEL_K_VERSION,Value:2.6.0,ValueFrom:nil,},EnvVar{Name:CAMEL_K_OPERATOR_ID,Value:camel-k,ValueFrom:nil,},EnvVar{Name:CAMEL_K_INTEGRATION,Value:ticker,ValueFrom:nil,},EnvVar{Name:CAMEL_K_RUNTIME_VERSION,Value:3.15.2,ValueFrom:nil,},EnvVar{Name:CAMEL_K_MOUNT_PATH_CONFIGMAPS,Value:/etc/camel/conf.d/_configmaps,ValueFrom:nil,},EnvVar{Name:CAMEL_K_MOUNT_PATH_SECRETS,Value:/etc/camel/conf.d/_secrets,ValueFrom:nil,},EnvVar{Name:QUARKUS_CONFIG_LOCATIONS,Value:/etc/camel/application.properties,/etc/camel/conf.d/user.properties,ValueFrom:nil,},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {<nil>} 500m DecimalSI},memory: {{536870912 0} {<nil>} BinarySI},},Requests:ResourceList{cpu: {{125 -3} {<nil>} 125m DecimalSI},memory: {{134217728 0} {<nil>} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:i-source-000,ReadOnly:true,MountPath:/etc/camel/sources/camel-k-embedded-flow.yaml,SubPath:camel-k-embedded-flow.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:application-properties,ReadOnly:true,MountPath:/etc/camel/application.properties,SubPath:application.properties,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zs6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ticker-769cddbc48-p8rl5_default(b135a32a-c62e-4c3c-b1dd-f37a2016732b): ErrImagePull: pinging container registry 10.103.36.20: Get 
"https://10.103.36.20/v2/": dial tcp 10.103.36.20:443: connect: connection refused
Feb 26 16:57:13 minikube kubelet[1473]: E0226 16:57:13.672357 1473 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"integration\" with ErrImagePull: \"pinging container registry 10.103.36.20: Get \\\"https://10.103.36.20/v2/\\\": dial tcp 10.103.36.20:443: connect: connection refused\"" pod="default/ticker-769cddbc48-p8rl5" podUID="b135a32a-c62e-4c3c-b1dd-f37a2016732b"
Feb 26 16:57:26 minikube kubelet[1473]: E0226 16:57:26.671913 1473 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"integration\" with ImagePullBackOff: \"Back-off pulling image \\\"10.103.36.20/camel-k/camel-k-kit-cuvk3dgbo3fs73945emg@sha256:0548e6c998e3b6ca919b6687a0177705f31c74e7f34dcbcb3dbc4e3cce0b708e\\\"\"" pod="default/ticker-769cddbc48-p8rl5" podUID="b135a32a-c62e-4c3c-b1dd-f37a2016732b"
Feb 26 16:57:40 minikube kubelet[1473]: E0226 16:57:40.671134 1473 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"integration\" with ImagePullBackOff: \"Back-off pulling image \\\"10.103.36.20/camel-k/camel-k-kit-cuvk3dgbo3fs73945emg@sha256:0548e6c998e3b6ca919b6687a0177705f31c74e7f34dcbcb3dbc4e3cce0b708e\\\"\"" pod="default/ticker-769cddbc48-p8rl5" podUID="b135a32a-c62e-4c3c-b1dd-f37a2016732b"
Feb 26 16:57:52 minikube kubelet[1473]: E0226 16:57:52.670920 1473 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"integration\" with ImagePullBackOff: \"Back-off pulling image \\\"10.103.36.20/camel-k/camel-k-kit-cuvk3dgbo3fs73945emg@sha256:0548e6c998e3b6ca919b6687a0177705f31c74e7f34dcbcb3dbc4e3cce0b708e\\\"\"" pod="default/ticker-769cddbc48-p8rl5" podUID="b135a32a-c62e-4c3c-b1dd-f37a2016732b"
Feb 26 16:58:08 minikube kubelet[1473]: I0226 16:58:08.081582 1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/ticker-769cddbc48-p8rl5" podStartSLOduration=-9223370409.773207 podStartE2EDuration="27m7.081570047s" podCreationTimestamp="2025-02-26 16:31:01 +0000 UTC" firstStartedPulling="2025-02-26 16:31:01.732954716 +0000 UTC m=+1781.122532958" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-26 16:58:08.081433023 +0000 UTC m=+3407.471011279" watchObservedRunningTime="2025-02-26 16:58:08.081570047 +0000 UTC m=+3407.471148309"
Operating System
Redhat/Fedora
Driver
Podman
I also tried starting minikube with the --insecure-registry option per the documentation (https://minikube.sigs.k8s.io/docs/handbook/registry/):
minikube start --driver=podman --container-runtime=cri-o --addons=registry --insecure-registry "10.0.0.0/24"
No luck.
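One thing that might be worth verifying (an untested suggestion on my part): whether the --insecure-registry value actually lands in the node's containers/CRI-O configuration at all, for example:
# inspect the registry configuration inside the minikube node
minikube ssh
cat /etc/containers/registries.conf
# if the flag were honored, a [[registry]] entry with insecure = true covering
# 10.0.0.0/24 (or the registry's clusterIP) would be expected here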
Just like what was mentioned in this issue regarding cri-o not supporting insecure-registry, can you check if this works:
- SSH into the Podman machine: podman machine ssh --username root [optional-machine-name]
- Edit the registries.conf file (add the insecure registry to the 'insecure-registries' list)
- Restart the Podman machine: podman machine restart [optional-machine-name]
Reference to above suggestion: https://podman-desktop.io/docs/containers/registries#setting-up-a-registry-with-an-insecure-certificate
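If anyone tries that route, the edit on the Podman machine would presumably be the same kind of entry used in the workaround above (a sketch only; the IP is this report's registry clusterIP, and the exact file layout can vary between Podman versions):
# inside: podman machine ssh --username root
# append to /etc/containers/registries.conf
[[registry]]
location = "10.103.36.20"
insecure = true
# then restart the machine from the host, e.g.:
podman machine stop && podman machine start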
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.