KEP-3698: Multi-Network
- One-line PR description: Multi-Network
- Issue link: #3698
This all seems a bit too abstract. KEPs traditionally propose either some Kubernetes functionality or an API object. It reads like the charter to a working group rather than a KEP.
Is there a way this can be closer to a specific proposal? I'd like to see the API types :-).
When the dust settles we need to be able to have a k8s pod do the equivalent of what docker/podman can do today. (Apologies for those on the calls who already heard all this.)
For example, using podman, since it supports the same CNI as k8s, I can use:
```
sudo podman run --privileged --detach --name=exampleOne --network=public,dmz,storage,private quay.io/nginx:latest
```
This simple example puts 4 interfaces in the container. What I want to do with them doesn't need to be a concern of the apiserver. In each case I pick plugins, and the .conflist configures them to solve my problem. What I use and how they are configured are implementation specific.
As discussed, we will need a way to pass plugin-specific info (static IP, static MAC, etc.) to each network/.conflist. Most/all use cases I've seen in the doc can be demonstrated with docker/podman.
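To make that concrete, here is a rough sketch of the kind of types I mean. Every name in it is hypothetical, for illustration only, and not something the KEP currently proposes:

```go
// Hypothetical sketch only: the kind of API types the KEP should spell out.
// None of these names come from the KEP itself.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// PodNetwork names a network that pods can attach to and maps it to an
// implementation-specific CNI configuration; what the plugins do with the
// interfaces stays out of the apiserver, as argued above.
type PodNetwork struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec PodNetworkSpec `json:"spec"`
}

type PodNetworkSpec struct {
	// ConfigName selects the node-local .conflist for this network.
	ConfigName string `json:"configName"`
}

// NetworkAttachment is what a pod spec would list, one entry per desired
// interface, e.g. "public", "dmz", "storage", "private".
type NetworkAttachment struct {
	Name string `json:"name"`
	// Params carries the plugin-specific info mentioned above (static IP,
	// static MAC, ...), analogous to CNI capability args such as "ips".
	Params map[string]string `json:"params,omitempty"`
}
```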
/cc
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: mskrocki. Once this PR has been reviewed and has the lgtm label, please assign thockin for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
Why all the use-cases?
As I see it, all of them are implementation specific. For the API extension, which is what the KEP is about, a single use-case #0 would be sufficient, like:
As an advanced network user I would like to add various network interfaces to various Pods for various reasons
I miss a detailed description of the clean-up of all PodNetwork attachments in a Pod when it dies. In my experience it is always much, much harder to delete things than to add them. As of now, the CRI plugin will call the CNI plugin with a DEL command. With Multus, it is Multus's responsibility to do a DEL on all its sub-CNI-plugins.
But who will do this in K8s multi-networking? Kubelet? If so, will KNI become a prerequisite?
Will this part be skipped? If so, it will be a requirement on implementations to garbage-collect resources such as IP addresses.
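For reference, a minimal sketch of the teardown loop some component would have to own, assuming libcni and node-local .conflist files; which component owns it (kubelet? the runtime? a KNI shim?) is exactly the open question:

```go
package main

import (
	"context"
	"log"

	"github.com/containernetworking/cni/libcni"
)

// teardownPodNetworks issues a CNI DEL for every network the pod was
// attached to, in reverse order of attachment.
func teardownPodNetworks(ctx context.Context, networks []string, rt *libcni.RuntimeConf) {
	cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)
	for i := len(networks) - 1; i >= 0; i-- {
		conf, err := libcni.LoadConfList("/etc/cni/net.d", networks[i])
		if err != nil {
			log.Printf("load %s: %v", networks[i], err)
			continue
		}
		// Keep going on errors: stopping at the first failure would strand
		// resources such as IPAM-allocated addresses on the other networks.
		if err := cni.DelNetworkList(ctx, conf, rt); err != nil {
			log.Printf("DEL %s: %v", networks[i], err)
		}
	}
}
```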
My summary:
- The user stories do not clearly define the user problems. We should understand the problems better, as some of them may be solved in a more Kubernetes-native way instead of having to replicate all the virtual-network complexity in Kubernetes itself.
For example, one user story I heard for giving Pods an additional external interface was to run BGP against an upstream router for fast failover. When I asked why they would not use probes, they said it was because probes were too slow, but that they would prefer probes if these could operate at sub-second intervals, so that can be solved with https://github.com/kubernetes/enhancements/issues/3066 (see the probe sketch after this list).
Another common request that we never solved is "Support port ranges or whole IPs in services" #23864: being able to assign IPs to Pods. Today people have to add an external interface to the Pod for that.
- The list of requirements needs a clearer justification; it is not easy to see its relation to the user stories.
- The new network objects seem to implement a kind of logical partitioning at the network level; however, Kubernetes uses namespaces as the logical partitioning (https://kubernetes.io/blog/2016/08/kubernetes-namespaces-use-cases-insights/). This may leave us with a logical partition and a physical partition via the network, but the physical partition will not be easy for users to observe, since it seems possible to have Pods with multiple networks in the same namespace. This does not look like nice UX, and it opens a lot of doubts about the feasibility of this change.
- I miss a lot of details about the overall behavior of the cluster: NetworkPolicies, Services, webhooks, the kubernetes.default Service, DNS, kubectl exec, port-forward. The KEP has to define the behavior of these core Kubernetes functionalities.
- There are external dependencies. SIG Network can drive the KEP, but we cannot impose changes on other SIGs; at least two SIGs, Node and Scheduling, will need to approve these changes, and I expect API Machinery, Scalability, and Architecture to get involved, as webhooks and kubernetes.default at least are going to be impacted. It also seems we will need implementations in the container runtimes; we cannot merge API objects and then wait several releases for the runtimes to implement the changes.
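On the probe point above, a minimal sketch with the existing core/v1 types showing why sub-second failover is not expressible today:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// fastestPossibleProbe returns the most aggressive probe the API allows:
// cadence is int32 whole seconds, so 1s is the floor. Sub-second probing
// is what kubernetes/enhancements#3066 is about.
func fastestPossibleProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
		},
		PeriodSeconds:    1, // minimum allowed value; no sub-second option
		FailureThreshold: 1, // fail over after a single miss, so ~1s at best
	}
}
```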
> The new network objects seem to implement some kind of logical partitioning at the network level; however, Kubernetes uses namespaces as the logical partitioning
Namespaces are not the only unit of logical partitioning in Kubernetes. For example, Pods are also partitioned by Node (Local traffic policy) and by zone (topology-aware routing).
> Pods are also partitioned by Node (Local traffic policy) and by zone (topology-aware routing).
That partitioning happens at the Service level, not at the Pod level.
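Both of those mechanisms are expressed on the Service object. A minimal illustration with the real core/v1 fields:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localTopologyAwareService shows that both partitioning knobs mentioned
// above hang off the Service, not the Pod.
func localTopologyAwareService() *corev1.Service {
	local := corev1.ServiceInternalTrafficPolicyLocal
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name: "example",
			// Opt in to topology-aware (zone-based) routing.
			Annotations: map[string]string{"service.kubernetes.io/topology-mode": "Auto"},
		},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "example"},
			Ports:    []corev1.ServicePort{{Port: 80}},
			// Restrict cluster-internal traffic to same-node endpoints.
			InternalTrafficPolicy: &local,
		},
	}
}
```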
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
@bowei: The lifecycle/frozen label cannot be applied to Pull Requests.
In response to this:
/lifecycle frozen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten