cri-dockerd
Should cri-dockerd be responsible for detecting cgroup-driver misconfiguration between kubelet and dockerd?
kubelet appears to work normally even when its cgroup driver differs from the one configured for the docker container runtime (via cri-dockerd). Should cri-dockerd check for this cgroupDriver misconfiguration between kubelet and dockerd?
For containerd, kubelet requires that the container runtime be configured to use the same driver (the default is now systemd): https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/#configuring-the-kubelet-cgroup-driver
i.e. the container runtime starts with "cgroupfs" and Kubernetes starts with "systemd", so they are not compatible out-of-the-box. The recommended configuration for quite some time (doubly so with rootless) has been to change both over to use systemd.
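For reference, a minimal sketch of that alignment, following the linked documentation and assuming the usual file locations (/etc/docker/daemon.json for dockerd and /var/lib/kubelet/config.yaml for the KubeletConfiguration; both paths can differ per distro):

/etc/docker/daemon.json:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

/var/lib/kubelet/config.yaml:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```

Both daemons need a restart afterwards (e.g. systemctl restart docker kubelet) for the drivers to actually match.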
@afbjorklund Thanks for your reply. I created a Kubernetes (v1.24) cluster with the docker container runtime via cri-dockerd, with docker configured with cgroup-driver=cgroupfs, kubelet's cgroup-driver=systemd, and cgroupsPerQOS left at its default of true. The cluster works well, but the cgroup hierarchy is organized as follows:
Working directory /sys/fs/cgroup/cpu:
├─ 1 /sbin/init maybe-ubiquity
├─1685 bpfilter_umh
├─docker
│ └─ca440a47ea65f16b52fe7a5745efdcb9abd540991840308dfe1e7b69043d764e
│   └─2995433 /usr/bin/cadvisor -logtostderr
├─user.slice
│ ├─ 979 gdm-session-worker [pam/gdm-launch-environment]
│ ├─ 1035 /lib/systemd/systemd --user
│ ├─3807357 -bash
│ ├─4035102 /lib/systemd/systemd --user
│ └─4035108 (sd-pam)
├─system.slice
│ ├─irqbalance.service
│ │ └─734 /usr/sbin/irqbalance --foreground
│ ├─containerd.service
│ │ └─824 /usr/bin/containerd
│ ├─switcheroo-control.service
│ │ └─752 /usr/libexec/switcheroo-control
│ ├─uuidd.service
│ │ └─38657 /usr/sbin/uuidd --socket-activation
│ └─rsyslog.service
│   └─750 /usr/sbin/rsyslogd -n -iNONE
└─kubepods.slice
  ├─kubepods-burstable.slice
  │ ├─kubepods-burstable-podbb36663d18b4964d0384c1dde92b488d.slice
  │ │ ├─a307de427fcb2ee094152109637ff3cab6a7fe4266fead7ce08786e0e83f4e93
  │ │ │ └─634698 /pause
  │ │ └─81c0dd97b60119b219e49de4488d7194849b3757ead20cf590446aa0b4d6cc2e
  │ │   └─634771 kube-apiserver --advertise-address=10.235.30.76 --allow-privil…
  │ ├─kubepods-burstable-pod5c95dda73fe61a49077ba12fc7c5781e.slice
  │ │ ├─1da65b48685c324004454e924c909fe3d9b132aaa1074a7bcc5327e21214dc8a
  │ │ │ └─3197 /pause
  │ │ └─77625aced208cfec95c593c4be1f548b2ab6cacddfe90e9ad1a8cd4d477dc3fb
  │ │   └─3596667 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/sc…
  │ ├─kubepods-burstable-pod8c9eed8b2eb0231fbb5e44904b7ba938.slice
  │ │ ├─58e7ef15818e78395d133acf1ff72c9d407844b845660dbbff0a822e85fc5607
  │ │ │ └─3066 etcd --advertise-client-urls=https://10.235.30.76:2379 --cert-fi…
  │ │ └─ba0fac27efbdced47258a510f1fcf0edf3891fcef8d55d5670cf7108818649eb
  │ │   └─2882 /pause
... ...
The cluster works, but the pods' cgroup hierarchy does not match the configuration (cgroupsPerQOS=true).
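If cri-dockerd were to take on this responsibility, one option would be a fail-fast check at startup. A minimal sketch (not cri-dockerd's actual code; the kubeletCgroupDriver parameter is a hypothetical input that would in practice come from a flag or from the kubelet/CRI configuration exchange):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

// checkCgroupDriver compares the cgroup driver reported by dockerd with the
// one kubelet was configured to use, and returns an error on a mismatch.
func checkCgroupDriver(ctx context.Context, kubeletCgroupDriver string) error {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return fmt.Errorf("connecting to dockerd: %w", err)
	}
	defer cli.Close()

	// Equivalent to `docker info`; the response includes the CgroupDriver field.
	info, err := cli.Info(ctx)
	if err != nil {
		return fmt.Errorf("querying dockerd info: %w", err)
	}

	if info.CgroupDriver != kubeletCgroupDriver {
		return fmt.Errorf("cgroup driver mismatch: dockerd uses %q but kubelet is configured for %q",
			info.CgroupDriver, kubeletCgroupDriver)
	}
	return nil
}

func main() {
	// Hypothetical: assume kubelet was configured with cgroupDriver: systemd.
	if err := checkCgroupDriver(context.Background(), "systemd"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("cgroup drivers match")
}
```

For a quick manual check, `docker info --format '{{.CgroupDriver}}'` prints the driver dockerd is actually using.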