cnf-testbed
Spec: Add DANM use case to the CNF Testbed
Topic: Add DANM-enabled examples to the CNF Testbed
Idea: to show Network Management w/ tenant networks and cluster networks
- manage networks in a K8s-native way (see the manifest sketch after this list)
- network resource management (lower level) API
- test on Layer 3 to ensure separation
- a missing element on K8s networking
- no applications should have access to physical resources
- to replace the "SR-IOV use case" idea (rename https://github.com/cncf/cnf-testbed/milestone/36 to remove SR-IOV)
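To make the tenant/cluster network idea concrete, here is a minimal sketch of creating an admin-owned DANM ClusterNetwork with the Kubernetes Python client. The `danm.k8s.io/v1` group/version, the `clusternetworks` plural and the spec fields (`NetworkID`, `NetworkType`, `Options`) are written from memory of the DANM docs and may differ between releases, so treat the object below as a placeholder to verify against the release that actually gets deployed:

```python
# Hedged sketch: creating an admin-owned DANM ClusterNetwork via the Kubernetes
# Python client. Field names and the CRD plural are assumptions to verify against
# the deployed DANM release.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
api = client.CustomObjectsApi()

cluster_net = {
    "apiVersion": "danm.k8s.io/v1",
    "kind": "ClusterNetwork",
    "metadata": {"name": "external-l3"},
    "spec": {
        "NetworkID": "external-l3",      # name pods refer to in their annotation
        "NetworkType": "ipvlan",         # delegate CNI plugin handling the interface
        "Options": {
            "host_device": "ens4",       # physical NIC stays under admin control
            "cidr": "10.10.0.0/24",      # per-network L3 subnet, used to test separation
            "allocation_pool": {"start": "10.10.0.10", "end": "10.10.0.200"},
        },
    },
}

# ClusterNetworks are cluster-scoped, hence the cluster-level call.
api.create_cluster_custom_object(
    group="danm.k8s.io", version="v1", plural="clusternetworks", body=cluster_net
)
```

A TenantNetwork would be created the same way with `create_namespaced_custom_object()` and a `tenantnetworks` plural; as I read the DANM docs, the point of the split is exactly the "no physical resources for applications" item above: tenants only get the namespaced API, while physical details like `host_device` stay with the admin.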
Brainstorming and use case discussion: https://docs.google.com/document/d/1VaF59CRVStx7kxH7X9Y-ltxnpzmlsqTF1PEBUEEYLkc/edit#
Reference links:
- https://github.com/nokia/danm
- https://github.com/nokia/danm/releases
- https://wiki.akraino.org/display/AK/Gerrit+Code+Repository+Overview
- https://www.nokia.com/networks/solutions/edge-cloud/
Other use case ideas:
- SR-IOV/NIC PCI device for networking to container, consumed by a DPDK-type application
Some notes from our meeting on the topic at ONS EU with @taylor @lixuna @denverwilliams @wavell and @Levovar (and later @ijw):
- We should showcase not only SR-IOV, but other features of DANM, like TenantNetworks and ClusterNetworks
- Control of the assignment of physical NICs to pods. See @lixuna's notes for a better description
- Radio Access Network (RAN) / packet core use case
- DPDK
- SR-IOV (a hedged pod sketch follows after this list)
- This could be implemented by fixing the current Physical NIC GW test case, which uses privileged mode due to the usage of DPDK's PMD drivers
- Equality for all interfaces
- Service discovery of non-primary interfaces over the ServiceDiscovery API
- Multiple segregated networks
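For the RAN / DPDK / SR-IOV items above, a hedged sketch of the workload side in the same style: a pod that asks DANM for a secondary interface on the ClusterNetwork from the previous sketch and requests an SR-IOV VF from a device plugin. The `danm.k8s.io/interfaces` annotation layout, the `clusterNetwork` key and the `intel.com/sriov_vf` resource name are assumptions based on the DANM and SR-IOV device plugin documentation; the real names depend on how the testbed is configured:

```python
# Hedged sketch: DPDK-style pod with a DANM-managed secondary interface and an
# SR-IOV VF requested from a device plugin. Annotation schema, resource name and
# image are placeholders.
import json

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "vdu-sim",
        "annotations": {
            # DANM reads this annotation to decide which networks to attach.
            "danm.k8s.io/interfaces": json.dumps(
                [{"clusterNetwork": "external-l3", "ip": "dynamic"}]
            ),
        },
    },
    "spec": {
        "containers": [
            {
                "name": "dpdk-app",
                "image": "example/dpdk-testpmd:latest",  # placeholder image
                "resources": {
                    # Hugepages plus the VF resource; limits must equal requests.
                    "requests": {
                        "intel.com/sriov_vf": "1",
                        "hugepages-1Gi": "2Gi",
                        "memory": "1Gi",
                        "cpu": "2",
                    },
                    "limits": {
                        "intel.com/sriov_vf": "1",
                        "hugepages-1Gi": "2Gi",
                        "memory": "1Gi",
                        "cpu": "2",
                    },
                },
            }
        ]
    },
}

core.create_namespaced_pod(namespace="default", body=pod)
```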
The Physical NIC GW test case uses privileged mode because DPDK's PMD drivers need it in order to get control over the PCI devices assigned to the container. At least that's what we found with @michaelspedersen. The fact that it later exposes memif has nothing to do with this; in fact, with NSM, memif interfaces work perfectly fine without any special privileges.
@nickolaev okay, I got confused by the acronyms at the meeting. It is corrected now. The main point is that no container should have privileged mode in production.
I think it depends on the PMD. I'm not saying I'm 100% sure, but IMO e.g. the Mellanox PMD does not require privileged mode. The Intel one probably also does not need full privileged mode, just SYS_ADMIN and/or NET_ADMIN (which is arguably not much better, but still).
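To illustrate the capabilities-instead-of-privileged point, here is a minimal sketch of a securityContext that could be dropped into the DPDK container of the pod sketch above. Which capabilities are actually sufficient depends on the PMD and the kernel driver binding (IPC_LOCK is only a common DPDK need, not a verified requirement here), so this is precisely what would need to be validated on the testbed rather than a known-good configuration:

```python
# Assumed capability set replacing privileged mode for the DPDK container.
# NET_ADMIN/SYS_ADMIN follow the comment above; IPC_LOCK is commonly needed by
# DPDK to lock hugepage-backed memory. Verify per NIC/PMD before relying on it.
dpdk_security_context = {
    "privileged": False,
    "capabilities": {
        "add": ["NET_ADMIN", "SYS_ADMIN", "IPC_LOCK"],
        "drop": ["ALL"],
    },
}

# Plugged into the container spec of the pod sketch above before creation:
# pod["spec"]["containers"][0]["securityContext"] = dpdk_security_context
```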
To discuss multiple specs/use cases, we can also use a shared Google Doc (easier to collaborate on than GitHub):
https://docs.google.com/document/d/1VaF59CRVStx7kxH7X9Y-ltxnpzmlsqTF1PEBUEEYLkc/edit#
Does anyone think something is still missing from the spec? From my side it is complete with respect to at least use case #1, and we can always come back and adapt it.
I would also propose to rename the UC to "[DANM Use Case]#1: Simulated vDU deployment with Intel physical functions" to be in line with how the NSM use cases are tracked.
If everyone agrees, I will create the UC description with an exact set of tasks and references as we discussed at the meeting!
BTW I'm thinking about creating two issues for the use case. The first is about introducing the generic possibility to use DANM as a building block for any use case. The second issue would be about requirements related to deploying the test workload for this specific UC.
I think it would be a good idea to split it as you mention. Having DANM as a building block will definitely be a huge benefit, and then, as a follow-up, having the UC(s) as a quick way of showcasing the functionality makes it easier for anyone interested to try it out without too much effort.