Client hangs when `--bitrate 0` is specified together with a moderate `--bytes` value in reverse mode
From iperf3 manual:
Setting the target bitrate to 0 will disable bitrate limits (particularly useful for UDP tests)
However, running server:
iperf3 -s
and client:
iperf3 -c 127.0.0.1 --bytes 5M -P 5 -u -R --bitrate 0
results in the client hanging: it keeps printing empty intervals and loads the CPU to 100%:
$ iperf3 -c 127.0.0.1 --bytes 5M -P 5 -u -R --bitrate 0
Connecting to host 127.0.0.1, port 5201
Reverse mode, remote host 127.0.0.1 is sending
[ 5] local 127.0.0.1 port 51746 connected to 127.0.0.1 port 5201
[ 7] local 127.0.0.1 port 58486 connected to 127.0.0.1 port 5201
[ 9] local 127.0.0.1 port 40853 connected to 127.0.0.1 port 5201
[ 11] local 127.0.0.1 port 49095 connected to 127.0.0.1 port 5201
[ 13] local 127.0.0.1 port 41086 connected to 127.0.0.1 port 5201
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-1.00 sec 992 KBytes 8.12 Mbits/sec 0.015 ms 0/31 (0%)
[ 7] 0.00-1.00 sec 224 KBytes 1.83 Mbits/sec 0.004 ms 0/7 (0%)
[ 9] 0.00-1.00 sec 352 KBytes 2.88 Mbits/sec 0.103 ms 0/11 (0%)
[ 11] 0.00-1.00 sec 512 KBytes 4.19 Mbits/sec 0.014 ms 5/21 (24%)
[ 13] 0.00-1.00 sec 576 KBytes 4.72 Mbits/sec 0.010 ms 0/18 (0%)
[SUM] 0.00-1.00 sec 2.59 MBytes 21.7 Mbits/sec 0.029 ms 5/88 (5.7%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 1.00-2.00 sec 0.00 Bytes 0.00 bits/sec 0.015 ms 0/0 (0%)
[ 7] 1.00-2.00 sec 0.00 Bytes 0.00 bits/sec 0.004 ms 0/0 (0%)
[ 9] 1.00-2.00 sec 0.00 Bytes 0.00 bits/sec 0.103 ms 0/0 (0%)
[ 11] 1.00-2.00 sec 0.00 Bytes 0.00 bits/sec 0.014 ms 0/0 (0%)
[ 13] 1.00-2.00 sec 0.00 Bytes 0.00 bits/sec 0.010 ms 0/0 (0%)
[SUM] 1.00-2.00 sec 0.00 Bytes 0.00 bits/sec 0.029 ms 0/0 (0%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 2.00-3.00 sec 0.00 Bytes 0.00 bits/sec 0.015 ms 0/0 (0%)
[ 7] 2.00-3.00 sec 0.00 Bytes 0.00 bits/sec 0.004 ms 0/0 (0%)
[ 9] 2.00-3.00 sec 0.00 Bytes 0.00 bits/sec 0.103 ms 0/0 (0%)
[ 11] 2.00-3.00 sec 0.00 Bytes 0.00 bits/sec 0.014 ms 0/0 (0%)
[ 13] 2.00-3.00 sec 0.00 Bytes 0.00 bits/sec 0.010 ms 0/0 (0%)
[SUM] 2.00-3.00 sec 0.00 Bytes 0.00 bits/sec 0.029 ms 0/0 (0%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ 5] 3.00-4.00 sec 0.00 Bytes 0.00 bits/sec 0.015 ms 0/0 (0%)
[ 7] 3.00-4.00 sec 0.00 Bytes 0.00 bits/sec 0.004 ms 0/0 (0%)
[ 9] 3.00-4.00 sec 0.00 Bytes 0.00 bits/sec 0.103 ms 0/0 (0%)
[ 11] 3.00-4.00 sec 0.00 Bytes 0.00 bits/sec 0.014 ms 0/0 (0%)
[ 13] 3.00-4.00 sec 0.00 Bytes 0.00 bits/sec 0.010 ms 0/0 (0%)
[SUM] 3.00-4.00 sec 0.00 Bytes 0.00 bits/sec 0.029 ms 0/0 (0%)
The problem is that test termination is handled by the client, while the server stops sending once it has sent the 5 MB. Because some datagrams are lost, the client never receives the full 5 MB and therefore never ends the test. The same issue applies to --blockcount.
There are two ways to solve the problem for reverse (-R) UDP tests:
- Do not allow setting either `--bytes` or `--blockcount` (so they can be set only for TCP or for non-reverse UDP tests).
- Change the definition of these options to limit received bytes/blocks instead of sent bytes/blocks.
Personally, I am in favor of option 1, as it is easier to implement, and I don't think the reduced functionality (disallowing reverse UDP tests limited by bytes/blocks) is a major issue. What do you think? Do you have a use case that actually requires this functionality?