Consider discontinuing the kubernetes-cni package
We should consider:
- removing the dependency on the kubernetes-cni package from other packages
- completely discontinuing the kubernetes-cni package after at least 2-3 release cycles
Having containernetworking/plugins installed used to be a requirement in the past, but with CNI plugins becoming widespread, that's no longer the case. Most users do not need containernetworking/plugins installed, so we shouldn't have a dependency on that package.
After verifying that the dependency is not needed, we should stop building the package. We have a policy against repackaging and hosting 3rd-party tools, which is recapped here: https://github.com/kubernetes/k8s.io/issues/7708
/sig release /area release-eng /priority important-longterm
This also applies to cri-tools, which is no longer required (since 1.32)
- https://github.com/kubernetes/kubeadm/issues/3064
https://github.com/kubernetes-sigs/cri-tools
@xmudrii : Could be a separate issue though?
Most users do not need containernetworking/plugins installed, so we shouldn't have a dependency on that package.
It is still needed, but it is the responsibility of the container runtime
And the project doesn't re-package those either (they are third party)
https://kubernetes.io/docs/setup/production-environment/container-runtimes/
https://github.com/containerd/containerd/blob/v2.0.0/docs/containerd-2.0.md#cri-containerd-cni--version-os-archtargz-release-bundles-have-been-removed
EDIT: We are using the flannel CNI; it might not be needed for other CNIs
I think it is the /opt/cni/bin/portmap plugin that is required (plus the br_netfilter kernel module)
image: ghcr.io/flannel-io/flannel-cni-plugin:v1.8.0-flannel1
https://github.com/lima-vm/lima/blob/master/templates/k8s.yaml
containerd:
system: true
user: false
# cri-tools
apt-get install -y cri-tools
# cni-plugins
apt-get install -y kubernetes-cni
It's not necessarily needed even with the container runtime. There must be some CNI config, and some binaries implementing that config, if you use anything other than hostNetwork; those are commonly installed via a hostNetwork daemonset, which may use none of these plugins.
In the earlier days something like kubenet / dockershim may have needed these out of the box and had some default configuration.
cc @kubernetes/sig-network-leads
Is there a good way to figure out what versions to use, preferably matching what Kubernetes was tested with?
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
Before this change (the version is provided in the repository, for each Kubernetes version):
sudo apt-get install -y kubernetes-cni
After this change (the version is provided in the documentation, and not always updated):
CNI_PLUGINS_VERSION="v1.3.0"
case $(uname -m) in
x86_64) ARCH="amd64";;
aarch64) ARCH="arm64";;
esac
DEST="/opt/cni/bin"
sudo mkdir -p "$DEST"
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_PLUGINS_VERSION}/cni-plugins-linux-${ARCH}-${CNI_PLUGINS_VERSION}.tgz" | sudo tar -C "$DEST" -xz
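One weakness of the documented snippet above: the case statement silently leaves ARCH unset on any other machine (armv7l, s390x, ...), so the curl URL ends up malformed. A defensive variant is easy; map_arch is a hypothetical helper name, not from the docs:

```shell
# Hypothetical helper (the name map_arch is ours, not from the docs): map
# `uname -m` output to the GOARCH-style names used in cni-plugins release
# artifact file names, failing loudly on machines the table does not cover.
map_arch() {
  case "$1" in
    x86_64)  echo amd64 ;;
    aarch64) echo arm64 ;;
    *) echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

# Usage: ARCH=$(map_arch "$(uname -m)") || exit 1
map_arch x86_64   # -> amd64
```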
And the same for cri-tools, the major version should match k8s but the minor could be trickier to guess...
PS. This problem is not new; we also had it with the packages (1.6.0 versus 1.7.1): https://github.com/kubernetes/release/issues/4100#issuecomment-3234166895
Is there a good way to figure out what versions to use, preferably matching what Kubernetes was tested with?
We intentionally do not advertise which versions the jobs happen to test with, because we don't want infighting over testing every CNI solution X N versions, every CRI version, etc. Advertising the versions we test invites pushing for your preferred version instead of us just using something stable that allows us to test.
There wouldn't be one singular version anyhow, and the binaries Kubernetes releases do not use CNI.
CNI is versioned and your CNI config specifies the version, so you don't really need tight coupling to what the container runtime was tested with either.
And the same for cri-tools, the major version should match k8s but the minor could be trickier to guess...
https://github.com/kubernetes-sigs/cri-tools#compatibility-matrix-cri-tools--kubernetes
It just felt like the installation instructions would grow even longer without the packages, but maybe it won't be so bad.
In my experience it is not tested at all, which was one of the reasons that I wanted to automate the README and get.k8s.io
And the same for cri-tools, the major version should match k8s but the minor could be trickier to guess...
https://github.com/kubernetes-sigs/cri-tools#compatibility-matrix-cri-tools--kubernetes
The matrix just says to use "master", but then says to match version:
| Kubernetes Version | cri-tools Version | cri-tools branch |
|---|---|---|
| ≥ 1.27.x | ≥ 1.27.x | master |
It's recommended to use the same cri-tools and Kubernetes minor version, because new features added to the Container Runtime Interface (CRI) may not be fully supported if they diverge.
The documentation for Debugging Kubernetes nodes with crictl says something similar:
https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md
Download the version that corresponds to your version of Kubernetes.
And containerd just says to get it from the release (so I guess the above):
https://github.com/containerd/containerd/blob/main/docs/cri/crictl.md
If you are a user, your deployment should have installed crictl for you. If not, get it from your release tarball.
Making sure to configure it, to avoid running into the old "fallback" issue...
- https://github.com/kubernetes-sigs/cri-tools/issues/893
runtime-endpoint: unix:///run/containerd/containerd.sock
Adopted from: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime
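A minimal sketch of pinning that endpoint so crictl never falls back to probing known sockets (cri-tools issue #893). The config is written to a temp file here purely for illustration; the real location is /etc/crictl.yaml, and the containerd socket path is an assumption that depends on your runtime:

```shell
# Sketch: persist the runtime endpoint for crictl. Real file: /etc/crictl.yaml.
# The socket path below assumes containerd with its default configuration.
CRICTL_CONFIG=$(mktemp)
cat > "$CRICTL_CONFIG" <<'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
cat "$CRICTL_CONFIG"
```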
So I guess it will end up with something like:
KUBERNETES_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt | cut -d'.' -f1-2)
CRICTL_VERSION=$(curl -fsSL https://api.github.com/repos/kubernetes-sigs/cri-tools/releases/latest | jq -r .tag_name)
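Once both tags are fetched, checking that they belong to the same minor series is just string surgery. A pure sketch of that comparison (no network; the tag values and the series helper name are examples, not live lookups):

```shell
# Sketch: strip the patch component from a release tag so the Kubernetes
# series can be compared against a cri-tools tag of the same minor.
series() { printf '%s\n' "${1%.*}"; }

K8S_TAG="v1.34.1"      # example; would come from https://dl.k8s.io/release/stable.txt
CRICTL_TAG="v1.34.0"   # example; would come from the cri-tools latest-release API

if [ "$(series "$K8S_TAG")" = "$(series "$CRICTL_TAG")" ]; then
  echo "minor versions match"
fi
```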
For the CNI plugins, can use nerdctl-full:
/opt/cni -> /usr/local/libexec/cni
and either "bridge"* or "flannel" plugins
Same as in the containerd documentation:
https://github.com/containerd/containerd/blob/main/docs/getting-started.md
* the old single-node setup, like https://github.com/containerd/containerd/blob/main/script/setup/install-cni
{
"cniVersion": "1.0.0",
"name": "containerd-net",
"plugins": [
{
"type": "bridge",
"bridge": "cni0",
"isGateway": true,
"ipMasq": true,
"promiscMode": true,
"ipam": {
"type": "host-local",
"ranges": [
[{
"subnet": "10.88.0.0/16"
}],
[{
"subnet": "2001:4860:4860::/64"
}]
],
"routes": [
{ "dst": "0.0.0.0/0" },
{ "dst": "::/0" }
]
}
},
{
"type": "portmap",
"capabilities": {"portMappings": true}
}
]
}
Adopted from: https://kubernetes.io/docs/concepts/cluster-administration/addons/#networking-and-network-policy
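For completeness, the conflist above has to land where the runtime's conf_dir points. A hedged sketch (temp dir used here for illustration; the real default is /etc/cni/net.d, and the file name follows containerd's install-cni script; the plugins array is elided, use the full bridge/portmap config quoted above):

```shell
# Sketch: install a conflist into the runtime's CNI config directory.
# Real directory: /etc/cni/net.d; temp dir used here so this is harmless.
NET_D=$(mktemp -d)
cat > "$NET_D/10-containerd-net.conflist" <<'EOF'
{ "cniVersion": "1.0.0", "name": "containerd-net", "plugins": [] }
EOF
ls "$NET_D"   # -> 10-containerd-net.conflist
```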
And yes, I still think that it is more complicated with CRI and CNI than what it used to be (with Docker, that is). 😀
But it is more configurable this way, and goes with the rest of the removal of the "in-tree" ways of doing things...
This also applies to cri-tools, that is no longer required (since 1.32)
@afbjorklund Let's keep cri-tools out of this discussion for now. Situation with cri-tools is much different, we don't need it as a dependency starting with https://github.com/kubernetes/kubeadm/issues/3064, but we can keep building it because it's a Kubernetes subproject. We can have some initial discussion about this in https://github.com/kubernetes-sigs/cri-tools/issues/1248
@kubernetes/sig-network-leads Can you please confirm that the kubernetes-cni package (aka CNI plugins) is no longer needed for at least most setups? If that's the case, we can kick off some work on this issue.
i guess it would depend on whether a CNI plugin already bundles the binaries that the package provides. so if an existing CNI plugin doesn't bundle them and assumes the user installs them manually, it would fail.
i am also interested in the current state of popular CNI plugins and whether we can drop the package and instruct users that they might have to install the binaries optionally and manually from GitHub.
The existing cni/kubernetes network plugin landscape is more stable these days, and all of them AFAIK bundle their own dependencies.
Do we have statistics of the package downloads? that can give a much better idea of its actual usage
Do we have statistics of the package downloads? that can give a much better idea of its actual usage
Since it is a required dependency of kubelet now, I don't think the downloads reflect the actual usage?
I noticed when changing from packages to tarballs, that it also changed from xz to gz compression:
16M cri-tools_1.34.0-1.1_amd64.deb
39M kubernetes-cni_1.7.1-1.1_amd64.deb
55M total
19M crictl-v1.34.0-linux-amd64.tar.gz
54M cni-plugins-linux-amd64-v1.7.1.tgz
73M total
So the download is slightly bigger now, but I think I will take the opportunity to switch over to zstd...
@xmudrii The dependencies are still there in the 1.35 RC, so I guess the packages will continue to be downloaded for another release? They are also still using the older versions, so it is testing with CRI 1.34 and CNI 1.7 (not 1.35 and 1.8)
cri-tools/unknown,now 1.34.0-1.1 amd64 [installed]
kubeadm/unknown 1.35.0~rc.1-1.1 amd64 [upgradable from: 1.34.3-1.1]
kubectl/unknown 1.35.0~rc.1-1.1 amd64 [upgradable from: 1.34.3-1.1]
kubelet/unknown 1.35.0~rc.1-1.1 amd64 [upgradable from: 1.34.3-1.1]
kubernetes-cni/unknown,now 1.7.1-1.1 amd64 [installed]
@afbjorklund That's correct, no changes for 1.35, we'll see if we can follow on this for 1.36.
For what it is worth, we have successfully removed both of them in the Lima template for k8s: k8s.yaml
Now they will continue to be installed as dependencies, but the binaries are unused (using nerdctl's cni)
[plugins."io.containerd.cri.v1.runtime".cni]
bin_dirs = ["/usr/local/libexec/cni","/opt/cni/bin"]
EDIT: We keep the legacy kubernetes-cni location of /opt/cni/bin in the PATH, because flannel uses it...
Both containerd and cri-o tell the user to install the cni plugins in /opt/cni/bin, but we use nerdctl-full.
And that installation uses libexec/cni, to keep them in the same /usr/local prefix as the other binaries.
All three projects (or four, with kubernetes-cni) are referring to the same binaries, from cni-plugins
just noticing this issue now...
In the earlier days something like kubenet / dockershim may have needed these out of the box and had some default configuration.
Yes, originally kubenet / dockershim needed the CNI loopback plugin to create the lo interface in the pod. (I think we eventually fixed it to do that itself, but...)
The existing cni/kubernetes network plugin landscape is more stable these days, and all of them AFAIK bundle their own dependencies.
They bundle their required dependencies, but half of the stock CNI plugins are add-ons that exist only to be chained with other CNI plugins (eg, bandwidth, firewall, tuning) and admins may expect those add-ons to be present and usable in a cluster.
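For illustration, a sketch of such a chain: a conflist where the stock bandwidth and portmap add-ons are chained after a bridge plugin. All values here are examples (not from the thread), and it assumes the bandwidth and portmap binaries are actually present on the node:

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    },
    { "type": "bandwidth", "capabilities": { "bandwidth": true } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

If the kubernetes-cni package goes away and the runtime's install method doesn't ship these add-ons, a config like this simply fails at pod sandbox creation.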
@kubernetes/sig-network-leads Can you please confirm that the kubernetes-cni package (aka CNI plugins) is no longer needed for at least most setups?
I'm not sure if we can confirm that, but I also don't think that's the right question. Kubernetes doesn't use CNI; CRI implementations do. If users are likely to need the default CNI plugins installed, then it seems like they should be installed by the container runtimes, not by Kubernetes.
It's the job of whoever is building a distro to assemble all the components to get a working Kubernetes cluster with their choice of CRI/CNI/CSI and other components. Similar to building with Legos. 😆
containerd < 2.0 used to bundle CNI plugins, but v2+ now asks users to fetch runc, containerd, and CNI plugins directly from their upstream sources.
Let's break the dependency link in 1.36 and announce that it will be removed for 1.38.
containerd < 2.0 used to bundle CNI plugins, but v2+ now asks users to fetch runc, containerd, and CNI plugins directly from their upstream sources.
Actually it asks the user to install them from Docker Inc. if you want to use packages; only tarballs are available upstream.
https://github.com/containerd/containerd/blob/main/docs/getting-started.md#option-2-from-apt-get-or-dnf
But since containerd.io does not package CNI plugins, you will have to find them somewhere else (like kubernetes-cni)
And the documentation for tarballs is constantly out-of-date, since it is not updated with the official packages.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
"Install CNI plugins (required for most pod networks):"
CNI_PLUGINS_VERSION="v1.3.0"
"Optionally install crictl (required for interaction with the Container Runtime Interface (CRI), optional for kubeadm):"
CRICTL_VERSION="v1.31.0"
But now it will be up to these 3rd-party and 2nd-party tools to do their own advertising, not part of the release.
Ok, so it seems https://github.com/containernetworking/plugins doesn't publish deb/rpm packages. Can we ask them to publish these themselves to OBS?
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ is showing the correct tarball instructions. All we need to do is arrange for the CNI project to build their own packages somewhere and update our docs to mention installing CNI plugins from wherever they are published.
Let's break the dependency link in 1.36 and announce that it will be removed for 1.38.
I agree with this and I'll take on this in the coming days. /assign
Ok, so it seems https://github.com/containernetworking/plugins doesn't publish deb/rpm packages. Can we ask them to publish these themselves to OBS?
We can do that, but I think it's important to clarify that it's completely up to them how they handle this, i.e. how they publish their project. That said, if they say they're not interested in having packages, that shouldn't be a blocker for us to stop building kubernetes-cni (in which case, we'll just recommend installing from the tarball).
Do we know someone from the cni-plugins projects that we can talk to about this?
/priority important-soon
/kind deprecation
Do we know someone from the cni-plugins projects that we can talk to about this?
I added an item to the CNI maintainers meeting agenda. (Next meeting is January 26 at 10:00 EST / 15:00 UTC.)
update our docs to mention installing CNI plugins from where they are published
The docs will not mention CNI at all; that is now the container runtime's problem!
So it will be covered by "container runtime", but only by linking to third parties:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime
- https://kubernetes.io/docs/setup/production-environment/container-runtimes/
Instead, the documentation will be the responsibility of each container runtime:
https://github.com/containerd/containerd/blob/main/docs/getting-started.md#step-3-installing-cni-plugins
- "Download the cni-plugins-<OS>-<ARCH>-<VERSION>.tgz archive from https://github.com/containernetworking/plugins/releases , verify its sha256sum, and extract it under /opt/cni/bin"
https://github.com/cri-o/cri-o/blob/main/install.md#setup-cni-networking
- "This tutorial will use the latest version of CNI plugins (git clone https://github.com/containernetworking/plugins) and build it from source."
And that only covers the installation and configuration of the cni-plugins programs...
(As mentioned above, it also allows for getting away from the hardcoded /opt/cni)
plugins.io.containerd.cri.v1.runtime.cni.bin_dir
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
crio.network.plugin_dirs
# Path to the directory where CNI configuration files are located.
# network_dir = "/etc/cni/net.d/"
# Paths to directories where CNI plugin binaries are located.
# plugin_dirs = [
# "/opt/cni/bin/",
# ]
So after 1.36 (or 1.38?) it is no longer something that will be installed by Kubernetes at all...
Currently it is just being pulled in by that legacy dependency, used between 1.24 and 1.31 or so.
The actual CNI add-on and plugin installation and configuration is left for a later "create" step:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
- https://kubernetes.io/docs/concepts/cluster-administration/addons/#networking-and-network-policy
This is left as an exercise for the reader. "CoreDNS will not start up before a network is installed."
CNI project agrees that it would make sense for them to publish their own packages. @squeed wasn't quite sure what the best way to actually do that was, so if people have pointers, please comment on https://github.com/containernetworking/plugins/issues/1228