k8s-rdma-shared-dev-plugin
Does k8s-rdma-shared-dev-plugin support Ethernet network cards (link_layer: Ethernet)?
On the host, I check the card information with ibstatus, and link_layer is Ethernet for all ports.
After using k8s-rdma-shared-dev-plugin to expose the devices to a pod, ibstatus inside the pod shows the output in the following screenshot, which suggests the Ethernet card is not recognized correctly.
Is something misconfigured on my side, or does this plugin not support Ethernet-type cards?
I'm not sure that running ibstatus in a container is an indication of a correct configuration.
The RDMA shared device plugin mounts the RDMA character devices into the container (under /dev/infiniband). To use RoCE, you additionally need a network device associated with the same NIC made available in the container.
You can use Multus + the macvlan CNI (using the network device above as "master") to provide the container with an additional network interface.
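As a rough sketch, the Multus attachment could look like the following NetworkAttachmentDefinition. The interface name (ens1f0) and the IPAM subnet are placeholders; replace them with the netdev backing your RDMA NIC and an address range that fits your cluster:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: roce-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "ens1f0",
    "mode": "bridge",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.100.0/24"
    }
  }'
```

A pod would then request this network via the annotation `k8s.v1.cni.cncf.io/networks: roce-net` in addition to requesting the RDMA resource from the device plugin.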
The NIC in RoCE mode works normally inside the container. The default GID should come from
/sys/class/infiniband/mlx5_0/ports/1/gids/0, but the effective gid_index inside the container does not start from 0.
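To see which gid_index values are actually populated inside the container, you can walk the sysfs GID table. A small sketch, assuming device mlx5_0 and port 1 (adjust for your NIC); unpopulated slots read as all zeros and are skipped:

```shell
# List the non-empty GID slots for an RDMA port.
list_gids() {
  local dir="$1"   # e.g. /sys/class/infiniband/mlx5_0/ports/1/gids
  local f gid idx
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    gid=$(cat "$f")
    # Unpopulated GID table entries read back as all zeros; skip them.
    [ "$gid" = "0000:0000:0000:0000:0000:0000:0000:0000" ] && continue
    idx=$(basename "$f")
    echo "gid_index=$idx gid=$gid"
  done
}

# Run against the real sysfs path when it is present:
if [ -d /sys/class/infiniband/mlx5_0/ports/1/gids ]; then
  list_gids /sys/class/infiniband/mlx5_0/ports/1/gids
fi
```

If index 0 is empty in the container but populated on the host, that matches the behavior described above; the application then has to pick the first populated gid_index rather than assuming 0.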