FFock
I have the same/similar issue with Rancher 2.6.6, K8S v1.21.12, Ubuntu 20.04 (latest).
Same issue exists with Rancher 2.6.6, K8S **v1.22.10**, Ubuntu 20.04 (latest).
Meanwhile I have tried to find a workaround, but have failed so far. Our Rancher uses RKEv1 to deploy the downstream Kubernetes cluster on which sysbox should be enabled. I tried to define...
I created the /etc/systemd/system/crio.service file with the config above (adapted of course), but the error is still the same: `Failed to create pod sandbox: rpc error: code = Unknown desc...
Pulling the k8s.gcr.io/pause:3.5 image on the node using `docker pull` does not fix the issue either. Does the initializing code download the image in a docker-in-docker environment?
The `/etc/crio/crio.conf` file is:
```
[crio]
storage_driver = "overlay"
storage_option = ["overlay.mountopt=metacopy=on"]

[crio.network]
plugin_dirs = ["/opt/cni/bin", "/home/kubernetes/bin"]

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_capabilities = ["CHOWN", "DAC_OVERRIDE", "FSETID", "FOWNER", ...
```
Ok, thank you for explaining the background. Things are getting clearer now. The problem can indeed be reproduced quite simply:
```
sudo crictl pull k8s.gcr.io/pause:3.5
FATA[0000] pulling image: rpc error:...
```
Problem solved (with a workaround): adding these two lines to the `[Service]` section of `/etc/systemd/system/crio.service` fixes the issue*: _*`NO_PROXY` might be improved with CIDR notation - I do not know yet,...
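For anyone hitting the same proxy problem: the two lines in question are not quoted above, but a `[Service]` proxy override of this kind typically looks like the following sketch. The proxy address and the `NO_PROXY` entries here are placeholders, not the actual values from this setup:

```ini
# Illustrative sketch only - proxy host/port and NO_PROXY list are
# placeholders. These lines go into the [Service] section of the
# crio unit (or, more robustly, into a drop-in such as
# /etc/systemd/system/crio.service.d/proxy.conf).
[Service]
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,.cluster.local"
```

A drop-in file has the advantage that it survives package upgrades of the main unit file, though as noted below, editing `/etc/systemd/system/crio.service` directly caused further problems in my case.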
My workaround described above using `/etc/systemd/system/crio.service` apparently has a serious drawback. After adapting the settings further and doing
```
$ sudo systemctl daemon-reload
$ sudo systemctl restart crio
```
kubelet,...
There were several kubelet processes running, and their number seemed to correspond to the number of times I had restarted crio. These instances survived reboots and might have been the...
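To check whether a node has accumulated leftover kubelet instances after a crio restart, a quick count like the following sketch can help (the `[k]ubelet` pattern is a common trick to keep grep from matching its own process in the `ps` output):

```shell
#!/bin/sh
# Count running kubelet processes. A count above 1 after
# `systemctl restart crio` suggests leftover instances that were
# not cleaned up. `|| true` keeps the script going when the count
# is 0, since grep -c then exits non-zero.
count=$(ps -eo cmd | grep -c '[k]ubelet' || true)
echo "kubelet processes: $count"
```

On a healthy node this should print `kubelet processes: 1` (or `0` on a machine where kubelet is not running at all).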