GNS3 cluster on ESXi: bottleneck
Picture for illustration
Hi all, I ran some benchmark tests with our GNS3 VM servers (2.2.40.1). The results were really good, but I found one bottleneck. I used iperf3 to get a measurable throughput.
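For reference, a minimal sketch of how such a measurement can be run (the address 10.0.0.2 and the 30-second duration are placeholders, not my actual lab values):

# on the receiving node (Debian / Linux_B): start the iperf3 server
iperf3 -s

# on Linux_A: run a TCP throughput test against it
iperf3 -c 10.0.0.2 -t 30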
Linux_A to Debian:
PASS: ping
PASS: iperf3 ~2 Gbit/s
Cluster, working but slow:
Linux_A is connected directly via eth1 to GNS_Cloud_A
Linux_B is connected directly via eth1 to GNS_Cloud_B
PASS: ping from Linux_A to Linux_B
PASS: iperf3 ~50 Mbit/s
Cluster with bottleneck:
Linux_A is connected via br1 to GNS_Cloud_A
Linux_B is connected via br1 to GNS_Cloud_B
FAIL: ping
Error: the ARP request preceding the ping is sent from Linux_A to Linux_B. I can see the reply with tcpdump on interfaces A_br1 and A_eth1, but not on gns3tap0-0, and not with Wireshark on the cable to Linux_A.
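To reproduce the check, something like this can be run on the GNS3 server that hosts the A side (interface names as above; -e prints the MAC addresses, -n skips name resolution):

tcpdump -eni A_br1 arp        # reply from Linux_B is visible here
tcpdump -eni A_eth1 arp       # reply is visible here as well
tcpdump -eni gns3tap0-0 arp   # reply never shows up here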
Workaround: I set static ARP entries on both sides (see the sketch below the results).
PASS: ping
PASS: iperf3 ~1 Gbit/s
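The workaround itself is just a permanent neighbour entry on each end; a sketch with iproute2, where the IPs, MACs and the interface name are placeholders for the peer's real values:

# on Linux_A: pin the MAC of Linux_B (placeholder values)
ip neigh replace 10.0.0.2 lladdr 52:54:00:aa:bb:01 dev eth1 nud permanent

# on Linux_B: the mirror entry for Linux_A
ip neigh replace 10.0.0.1 lladdr 52:54:00:aa:bb:02 dev eth1 nud permanent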
Question: Is there a way to configure the VM without that workaround? I monitored iptables, arptables, ebtables and different driver settings without finding any hint of a drop. Unicasts from Debian work fine, but every unicast frame from Linux_B to Linux_A shows up at A_br1 and A_eth1, yet not at gns3tap0-0 or on the cable to Linux_A. It is simply gone ;/
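For completeness, a rough sketch of checks of that kind on the GNS3 server (bridge name as above; the bridge-nf sysctl only exists if the br_netfilter module is loaded):

# look for filter rules with non-zero drop counters
iptables -L -v -n
arptables -L
ebtables -L --Lc

# is bridged traffic being passed through iptables at all?
sysctl net.bridge.bridge-nf-call-iptables

# has the bridge learned the right MACs on the right ports?
bridge fdb show br A_br1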