Test throughput on a dual-port 10Gbps card in "loopback" mode
Greetings. Is there a way on Linux to test throughput on a dual-port 10Gbps ConnectX3-EN Pro card in "loopback" mode, that is, with one port plugged directly into the other, and with VMA enabled on each port, of course?
We have several of these cards installed in servers that are deployed in the data centre, but I have never had the opportunity to get two servers for a test at once. I have one now and would like to perform such a test, if possible.
Eureka :) I am able to perform tests in the data centre. I recalled that 3 servers are connected to the same router.
Glad you solved the testing problem you had. Can this ticket be closed?
Regarding your question... once you connect the cable between the two ports you should see a link UP on both ports. But the problem is: what IP address + subnet config will make it work in that mode?
The only solution I've found uses firewall rules to solve the address puzzle. BTW, it only works on the kernel stack :(
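For reference, here is roughly what such a firewall-based trick typically looks like. The kernel normally short-circuits local-to-local traffic over loopback, so you invent "shadow" peer addresses and use NAT to force packets onto the wire. A minimal sketch, using the two port names described below and hypothetical shadow addresses 10.60.0.1/10.60.0.2 plus placeholder MAC addresses (not from my actual setup; sysctl tweaks such as relaxing rp_filter may also be needed):

```bash
# Shadow addresses (hypothetical): 10.60.0.1 stands in for 192.168.253.1,
# 10.60.0.2 for 192.168.253.2.
# Route each shadow address out of the physical port that faces its peer
ip route add 10.60.0.2 dev enp3s0
ip route add 10.60.0.1 dev enp3s0d1

# Static ARP so the shadow addresses resolve to the real peer MACs
# (replace the placeholders with the actual port MAC addresses)
arp -i enp3s0 -s 10.60.0.2 <MAC_of_enp3s0d1>
arp -i enp3s0d1 -s 10.60.0.1 <MAC_of_enp3s0>

# NAT: rewrite the source to the shadow address on the way out, and the
# shadow destination back to the real address on the way in
iptables -t nat -A POSTROUTING -d 10.60.0.2 -j SNAT --to-source 10.60.0.1
iptables -t nat -A PREROUTING  -d 10.60.0.2 -j DNAT --to-destination 192.168.253.2
iptables -t nat -A POSTROUTING -d 10.60.0.1 -j SNAT --to-source 10.60.0.2
iptables -t nat -A PREROUTING  -d 10.60.0.1 -j DNAT --to-destination 192.168.253.1
```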
So while libvma doesn't support loopback testing of the card, what I did was the following:
- Connect one port of the card to the other port on the same card (I have just one card and one computer :smile:)
- Use Linux's network namespaces to "jail" each port into a different namespace
- Perform a test from one network namespace to the other.
Specifically, what I had is enp3s0 (192.168.253.1) and enp3s0d1 (192.168.253.2). Using the following script you can create the namespaces (run as root):
```bash
#!/bin/bash
INTERFACE_1=enp3s0
INTERFACE_2=enp3s0d1

# Create one namespace per port
ip netns add ns_${INTERFACE_1}
ip netns add ns_${INTERFACE_2}

# Move the first port into its namespace, address it, and bring it up
ip link set ${INTERFACE_1} netns ns_${INTERFACE_1}
ip netns exec ns_${INTERFACE_1} ip addr add dev ${INTERFACE_1} 192.168.253.1/24
ip netns exec ns_${INTERFACE_1} ip link set dev ${INTERFACE_1} up

# Same for the second port
ip link set ${INTERFACE_2} netns ns_${INTERFACE_2}
ip netns exec ns_${INTERFACE_2} ip addr add dev ${INTERFACE_2} 192.168.253.2/24
ip netns exec ns_${INTERFACE_2} ip link set dev ${INTERFACE_2} up
```
Great, now each interface is in a different namespace and must "use the wire" to reach the other interface.
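A quick sanity check (not part of my original run, but cheap) is to ping across the namespaces before involving sockperf:

```bash
# From the first namespace, ping the address that now lives in the second one
ip netns exec ns_enp3s0 ping -c 3 192.168.253.2
```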
Testing with sockperf for the "server" part (enp3s0):

```
# export LD_PRELOAD=libvma.so
# export VMA_SPEC=latency
# ip netns exec ns_enp3s0 sockperf server -i 192.168.253.1
```
And the client (enp3s0d1):

```
# export LD_PRELOAD=libvma.so
# export VMA_SPEC=latency
# ip netns exec ns_enp3s0d1 sockperf ping-pong --client_ip 192.168.253.2 -t 10 -i 192.168.253.1
```
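One caveat: exporting LD_PRELOAD in the shell means libvma is also preloaded into the ip binary itself, since ip netns exec passes the environment along. A variant that scopes the preload to sockperf alone (same addresses as above) would be:

```bash
# Preload VMA only for the sockperf process inside the namespace
ip netns exec ns_enp3s0 env LD_PRELOAD=libvma.so VMA_SPEC=latency \
    sockperf server -i 192.168.253.1
```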
I was able to verify that the data is actually sent on the wire because:
- When the cable is disconnected, the test fails. I know that alone might not be sufficient, so;
- Looking at the ConnectX interface LEDs, it looks like data is actually passing while the test is running (the counter check below gives a more quantitative signal).
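For that more quantitative signal, the per-port packet counters can be read from inside each namespace and should climb steadily while the test runs. Assuming the same interface names as above:

```bash
# Hardware-level port counters; with kernel bypass these are more reliable
# than the kernel's own netdev statistics
ip netns exec ns_enp3s0 ethtool -S enp3s0 | grep -i packets
```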
Using the kernel-bypass path (libvma), I get latency similar to what I'd expect when testing between two computers.
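For completeness, tearing down the setup is just a matter of deleting the namespaces; a physical NIC that was moved into a namespace returns to the root namespace when that namespace is removed:

```bash
# Deleting the namespaces returns the physical ports to the root namespace
ip netns del ns_enp3s0
ip netns del ns_enp3s0d1
```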
Did I make a mistake? Please let me know.