CNI Deprecation is a Bad Decision – Request to Retain Support
Feature request description
I recently noticed in cni/README.md that the CNI backend is officially marked for removal in Podman 5.0. In my opinion, this is a bad decision that could negatively impact many users and workflows. CNI (Container Network Interface) is a major force in the world of containerised technologies: it has become the standard networking interface for container orchestration systems such as Kubernetes (K8s). CNI should therefore not be removed lightly. Here are some key reasons.
Widespread compatibility to ensure ecosystem stability
CNI is a widely adopted standard: it is used not only by mainstream container orchestration systems such as Kubernetes, but also plays an important role in other container platforms. If CNI is removed, users will have to shift to Netavark, a project developed and maintained primarily by Red Hat. Such a shift would reduce interoperability between different ecosystems, especially in non-Red Hat environments.
Avoid Vendor Lock-In, Keep It Open
Removing CNI may cause Podman to become too dependent on Red Hat's specific networking stack. In an open container ecosystem, we should prioritise open standards over single-vendor solutions. Many organisations have built infrastructure and automated processes around CNI, and removing it will force them to migrate unnecessarily, limiting user choice.
Podman's application scope extends far beyond the Red Hat ecosystem
Podman is not limited to OpenShift and Red Hat-based distributions. Many users are running Podman alongside Kubernetes, which requires CNI to handle networking. Abandoning CNI may leave users who rely on Podman for cross-platform container workflows feeling isolated.
Flexibility and the importance of user choice
While Netavark may bring some advantages, users should not be forced to accept a single solution. Keeping CNI as an optional backend ensures that Podman can remain compatible with a wider range of infrastructure.
To summarise, the presence of CNI as a key component of the container network is essential to ensure compatibility between different container orchestration systems, avoid vendor lock-in, and maintain flexible options for users. As such, it should not be removed lightly.
Suggest potential solution
Please reconsider the decision to remove CNI and keep it as an optional networking backend. While Netavark can be the default, CNI should remain available to support users who rely on it for broader container orchestration compatibility. Would the maintainers be open to discussing a way to retain CNI support in Podman?
Have you considered any alternatives?
I've used Docker, but I don't like its bootstrapping, its requirement for root privileges, and its central daemon, which is why I use Podman.
Additional context
Add any other context or screenshots about the feature request here.
Hi @vxtls
I just wanted to add some additional context to the matter by way of two blog posts from members of the core team:
- https://blog.podman.io/2023/01/podman-begins-cni-plugins-deprecation/
- https://blog.podman.io/2023/11/cni-deprecation-and-removal-from-podman-5-0/
I can't see the deprecation decision being reverted, but as mentioned in the second blog post, a build tag has been added for distributions that would like to continue including CNI support: https://github.com/containers/common/pull/1767
You can see it being used for FreeBSD: https://github.com/containers/podman/blob/7131cfa48ca7666e4dbd05a06ff149e922328153/Makefile#L74-L75
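For illustration, a rough sketch of how a distribution might keep building with CNI support compiled in. This is not an official recipe: it assumes the tag is simply named `cni` (as added in the linked PR and used in the FreeBSD Makefile lines), so double-check the tag list and targets for the Podman release you actually build.

```shell
# Sketch only: build Podman with the optional CNI backend compiled in.
# Assumes the build tag is named "cni" (added in containers/common#1767);
# verify the tag and targets against the Makefile of your release.
git clone https://github.com/containers/podman.git
cd podman
make BUILDTAGS="seccomp systemd cni" binaries
```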
Raising this now is IMO too late; the decision has been made and I don't see that changing.
We already removed it in 5.0, and we (upstream) are no longer supporting it. There is still the build tag option, but that exists only for compatibility on RHEL 9 and FreeBSD. I expect the option to be removed in something like Podman 6.0.
There are many reasons why we started netavark for Podman. @mheon and @baude can likely speak better to that, because these issues have been around longer than I have, but my understanding is:
- The plugin architecture is not great for performance. We really don't need all the flexibility of the plugins.
- Upstream CNI focuses mainly on k8s, not our use cases. They are looking into adding daemons and such, which we didn't want.
- Contributing fixes/features to CNI is difficult. It is a separate project not under our direct control, and of course what makes sense for us didn't necessarily make sense for them and k8s.
Avoid Vendor Lock-In, Keep It Open
I don't follow this point at all. Podman and netavark have been FOSS since they started, and we have always accepted community contributions and even other maintainers. In fact, Podman (including netavark and other tools) was recently contributed to the CNCF to make it vendor neutral. There are (or will be) written rules on how maintenance works, e.g. https://github.com/containers/podman/pull/25398. This is for everyone who wants to maintain and influence the direction of the project, not just Red Hat.
While Netavark may bring some advantages, users should not be forced to accept a single solution. Keeping CNI as an optional backend ensures that Podman can remain compatible with a wider range of infrastructure.
Could you bring up actual use cases where you would need CNI over netavark?
Sure, user choice is nice, but it doesn't help us when it comes with a significant maintenance cost. To me it is really not worth keeping.
@vxtls, I admire your passion for Podman. The primary issue, as @Luap99 points out, was that the upstream CNI project and Podman were diverging in direction. We had very clear input from users that CNI was not sufficient, in particular in terms of parity with Docker's networking. For most, this change was an unmitigated success; however, I appreciate that you see it otherwise.
I think it would be best to file the use cases that the netavark/aardvark combination does not satisfy as RFEs against netavark, so that they can be considered for value and inclusion. You can also propose them as RFEs here. The more information you can provide, the easier it is for us to evaluate and prioritize them. Of course, pull requests after agreement on the RFEs are welcome.
And finally, I assure you that the change in networking was made for the benefit of podman users and not Red Hat. There was no element of "vendor lock-in" as part of the decision.
Many users are running Podman alongside Kubernetes, which requires CNI to handle networking
You can use Podman with one network stack (netavark, host, ...) and CRI-O with another one (CNI)
They only share the images, so podman (buildah) would build the images and hand them over to cri-o.
But you don't use Podman to run k8s, just as you no longer use Docker for k8s but instead containerd.
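As a quick illustration of that separation (a sketch, not from the thread; the `podman info` field name and the CNI paths below are the common defaults and may differ per version and distro):

```shell
# Sketch: Podman and CRI-O pick their network stacks independently.
# Podman 5.x reports netavark as its backend:
podman info --format '{{.Host.NetworkBackend}}'

# CRI-O keeps reading CNI configuration from its own directories,
# typically the standard CNI locations:
ls /etc/cni/net.d/
ls /opt/cni/bin/
```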
I've used Docker, but I don't like its bootstrapping, its requirement for root privileges, and its central daemon, which is why I use Podman.
You can use docker rootless, and you can use podman rootful. It is mostly a matter of default settings.
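A small sketch of that point (setup details vary by distro; the rootless setup script ships with Docker's rootless extras package):

```shell
# Sketch: the rootless/rootful split is a default, not a hard limit.
# Rootless Docker (script provided by docker-ce-rootless-extras):
dockerd-rootless-setuptool.sh install

# Rootful Podman is simply Podman run as root:
sudo podman run --rm docker.io/library/alpine cat /etc/os-release
```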
So is it possible for Podman to use CRI-O as a network stack?
No. CRI-O is another container engine, like Podman. CRI-O is dedicated to running Kubernetes, and as such it still supports (exclusively uses, even) CNI for networking. Podman was never intended to be used as a container runtime for Kubernetes; our focus is on single node use only. Podman doesn't use CRI-O, and CRI-O doesn't use Podman; they're separate applications doing similar things that share a lot of the same underlying libraries.
You do see Podman being used alongside CRI-O (we share an image store with them, which allows Podman to be used for simple things like preloading images, etc.) or as a setup tool on Kubernetes nodes, but we're not a CRI-compliant runtime that runs containers for Kubernetes. Losing CNI support is inconvenient in the Podman-on-a-K8s-node use case, but that's not most of our users, and continued support was frankly not worth the trouble. I think @baude captured the biggest reasons there; I think CNI upstream deciding to move in a direction that made it much harder for us to use it (but easier for Kubernetes to do so) was the breaking point.
You can also use other workarounds, such as the "host" network or even the "none" network for Podman (for k8s). Since it is only used for building (and loading) images, the requirements are different from those for running containers...
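A minimal sketch of that workaround (hypothetical image tag):

```shell
# Sketch: image builds don't need a CNI-managed bridge network.
# Build with no network at all when nothing has to be downloaded,
# or reuse the host network when it does:
podman build --network=none -t example/app:latest .
podman build --network=host -t example/app:latest .
```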
At least this is how Podman is being used on minikube nodes with cri-o, as a complement to crictl (and CRI).
It is also possible to run Kubernetes nodes as containers inside Podman, both with minikube and with kind.
Neither of those use cases needs CNI.
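For reference, a sketch of those two setups (the environment variable and flags are the ones documented by kind and minikube at the time of writing and may change):

```shell
# Sketch: Kubernetes nodes running as containers inside Podman.
# kind with its (experimental) podman provider:
KIND_EXPERIMENTAL_PROVIDER=podman kind create cluster

# minikube with the podman driver and CRI-O inside the node container:
minikube start --driver=podman --container-runtime=cri-o
```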
But so far, the minikube ISO still ships an older version*.
* i.e. Podman version 3, which still defaulted to CNI. It might keep using CNI for version 4, if only to cut down on dependencies...
But upgrading to version 5 and beyond, and including netavark/aardvark (instead of CNI), would not be a big deal either.
https://github.com/kubernetes/minikube/tree/master/deploy/iso/minikube-iso/package/podman
Actually, CRI-O is a bigger problem for minikube, since it requires a new version for each k8s version.
I hope this further explains the position of the Podman maintainers, @vxtls.
As we have reaffirmed, the CNI deprecation will not be reversed. Do you have any further clarifying questions before I mark this issue closed as "not planned"?
OK, you can close this issue