
UDP Random Packet Size and Random Delay (Gap) Time between Packets

davidBar-On opened this pull request 4 years ago • 5 comments

  • Version of iperf3 (or development branch, such as master or 3.1-STABLE) to which this pull request applies: master

  • Issues fixed (if any):

  • Brief description of code changes (suitable for use as a commit message):

Add support for sending UDP packets with random lengths taken from a defined range (a second argument to -l), and for adding a delay before sending each packet (--gap-time min[/max]). The delay is either fixed or random, taken from the specified range. These enhancements allow creating simple traffic profiles, which can be important for testing packet loss, jitter, and delay in low-bandwidth networks (e.g. for IoT) and in unstable networks.
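
For example, combining both options (a hypothetical invocation based on the syntax described above; the --gap-time range follows the min[/max] form):

# UDP test: random packet sizes between 100 and 200 bytes, with a random
# 5-20 ms gap before each packet is sent
iperf3 -c 192.168.1.10 -u -l 100/200 --gap-time 5/20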

This is a simplified subset of the functionality from the rejected PR #1004. Although the main purpose of these enhancements is to test the expected behavior of different traffic profiles rather than to directly measure throughput, I believe they can still be very useful for iperf3 users.

(A note about the delay between packets: since the OS's minimum sleep() time may be relatively large, the minimum sleep time is estimated at startup. If the requested delay is smaller than this minimum, the delay is not applied to every packet, but only to one out of every several packets. E.g., if the estimated minimum sleep() time is 15ms and the requested wait time is 3ms, the delay is applied to only 1 out of every 5 packets.)
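
The per-packet decision can be sketched roughly as follows (a minimal C sketch with hypothetical names; the PR's actual code differs):

#include <unistd.h>

/* min_sleep_ms: estimated minimum effective sleep() time (e.g. 15 ms).
 * gap_ms:       requested inter-packet delay (e.g. 3 ms). */
static int packets_per_sleep(double min_sleep_ms, double gap_ms)
{
    if (gap_ms >= min_sleep_ms)
        return 1;                              /* can delay every packet */
    return (int)(min_sleep_ms / gap_ms + 0.5); /* e.g. 15/3 -> every 5th */
}

/* Called once per sent packet: sleep only on every Nth packet, for N times
 * the requested gap, so the average per-packet delay still matches gap_ms. */
static void maybe_gap_sleep(long *pkt_count, double min_sleep_ms, double gap_ms)
{
    int n = packets_per_sleep(min_sleep_ms, gap_ms);
    if (++*pkt_count % n == 0)
        usleep((useconds_t)(gap_ms * (double)n * 1000.0));
}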

UPDATE

PR #1343 suggests a better way to implement the gap delay time (using a select() timeout instead of sleep()), so this PR remains relevant only for the random packet size part.
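
(The select()-based approach works by calling select() with no file descriptors and only a timeout, which on most systems has finer granularity than sleep(). The following is a sketch of the general technique, not PR #1343's actual code.)

#include <sys/select.h>

/* Block for roughly usec microseconds. With nfds == 0 and no fd sets,
 * select() simply waits until the timeout expires. */
static void gap_delay_usec(long usec)
{
    struct timeval tv;
    tv.tv_sec  = usec / 1000000;
    tv.tv_usec = usec % 1000000;
    (void)select(0, NULL, NULL, NULL, &tv);
}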

davidBar-On · Nov 22 '20 12:11

@davidBar-On - I had to change this around to get it to build; it seems to work in regular forward mode:

# src/iperf3 -c 127.0.0.1 -p 8000 -ul 100/200 -d -k 5
Minimum sleep() time is 1.078700[ms]
send_parameters:
{
        "udp":  true,
        "omit": 0,
        "time": 0,
        "blockcount":   5,
        "parallel":     1,
        "len":  100,
        "len_max":      200,
        "bandwidth":    1048576,
        "pacing_timer": 1000,
        "client_version":       "3.11"
}
Connecting to host 127.0.0.1, port 8000
SNDBUF is 212992, expecting 0
RCVBUF is 212992, expecting 0
Setting application pacing to 131072
Sending Connect message to Sockt 5
Connect received for Socket 5, sz=4, buf=39383736, i=0, max_len_wait_for_reply=4
Buffer 200 bytes
[  5] local 127.0.0.1 port 36334 connected to 127.0.0.1 port 8000
sent 182 bytes of 182 after waiting 0[ms], total 182
sent 101 bytes of 101 after waiting 0[ms], total 283
sent 130 bytes of 130 after waiting 0[ms], total 413
sent 191 bytes of 191 after waiting 0[ms], total 604
sent 101 bytes of 101 after waiting 0[ms], total 705
send_results
{
        "cpu_util_total":       20.128617363344052,
        "cpu_util_user":        10.064308681672026,
        "cpu_util_system":      10.064308681672026,
        "sender_has_retransmits":       0,
        "streams":      [{
                        "id":   1,
                        "bytes":        705,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      5,
                        "start_time":   0,
                        "end_time":     0.005163
                }]
}
get_results
{
        "cpu_util_total":       10.260770975056689,
        "cpu_util_user":        0,
        "cpu_util_system":      10.279667422524566,
        "sender_has_retransmits":       -1,
        "streams":      [{
                        "id":   1,
                        "bytes":        705,
                        "retransmits":  -1,
                        "jitter":       5.3229675292968757e-06,
                        "errors":       0,
                        "packets":      5,
                        "start_time":   0,
                        "end_time":     0.005248
                }]
}
interval_len 0.005163 bytes_transferred 705
interval forces keep
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-0.01   sec   705 Bytes  1.09 Mbits/sec  5
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-0.01   sec   705 Bytes  1.09 Mbits/sec  0.000 ms  0/5 (0%)  sender
[  5]   0.00-0.01   sec   705 Bytes  1.07 Mbits/sec  0.005 ms  0/5 (0%)  receiver

iperf Done.

However, on reverse, the packet counts get doubled and the error rate approaches 50%. Did I miss something here?

# src/iperf3 -c 127.0.0.1 -p 8000 -ul 100/200 -d -k 5 -R
Minimum sleep() time is 1.078400[ms]
send_parameters:
{
        "udp":  true,
        "omit": 0,
        "time": 0,
        "blockcount":   5,
        "parallel":     1,
        "reverse":      true,
        "len":  100,
        "len_max":      200,
        "bandwidth":    1048576,
        "pacing_timer": 1000,
        "client_version":       "3.11"
}
Connecting to host 127.0.0.1, port 8000
Reverse mode, remote host 127.0.0.1 is sending
SNDBUF is 212992, expecting 0
RCVBUF is 212992, expecting 0
Setting application pacing to 131072
Sending Connect message to Sockt 5
Connect received for Socket 5, sz=4, buf=39383736, i=0, max_len_wait_for_reply=204
Buffer 200 bytes
[  5] local 127.0.0.1 port 52416 connected to 127.0.0.1 port 8000
received 200 bytes of 200, total 200
pcount 1 packet_count 0 size 200
received 200 bytes of 200, total 400
pcount 3 packet_count 1 size 200
received 200 bytes of 200, total 600
pcount 5 packet_count 3 size 200
received 200 bytes of 200, total 800
pcount 7 packet_count 5 size 200
received 200 bytes of 200, total 1000
pcount 9 packet_count 7 size 200
send_results
{
        "cpu_util_total":       12.932707861533485,
        "cpu_util_user":        0,
        "cpu_util_system":      12.932707861533485,
        "sender_has_retransmits":       -1,
        "streams":      [{
                        "id":   1,
                        "bytes":        1000,
                        "retransmits":  -1,
                        "jitter":       7.8721160888672e-06,
                        "errors":       4,
                        "packets":      9,
                        "start_time":   0,
                        "end_time":     0.011206
                }]
}
get_results
{
        "cpu_util_total":       10.392122384385441,
        "cpu_util_user":        0,
        "cpu_util_system":      10.409706347810795,
        "sender_has_retransmits":       0,
        "streams":      [{
                        "id":   1,
                        "bytes":        1534,
                        "retransmits":  -1,
                        "jitter":       7.8721160888672e-06,
                        "errors":       4,
                        "packets":      10,
                        "start_time":   0,
                        "end_time":     0.011331
                }]
}
interval_len 0.011206 bytes_transferred 1000
interval forces keep
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-0.01   sec  1000 Bytes   714 Kbits/sec  0.008 ms  4/9 (44%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-0.01   sec  1.50 KBytes  1.08 Mbits/sec  0.000 ms  0/10 (0%)  sender
[  5]   0.00-0.01   sec  1000 Bytes   714 Kbits/sec  0.008 ms  4/9 (44%)  receiver

iperf Done.

When not using random packet sizes, reverse mode gets the proper packet counts as well as packet loss stats.

Here's my working tree so far: https://github.com/swg0101/iperf/

swg0101 · Apr 17 '22 04:04

@swg0101, I built and ran my original code (more than 1.5 years old) and it runs o.k. in both directions. I haven't had time yet to evaluate your code, but from the log I see that you merged the changes into the current master. Is this correct? If so, there may be changes made after mine that are causing the issues.

I assume that the server is also using the same iperf3 as the client.

In any case, I see that in reverse mode the packet size is always 200 bytes and not random between 100 and 200 as expected. Can you also get the server-side logs to see whether it received the proper parameters? Did you merge all the changes?

If you don't find the issue, I will try to help with the evaluation later this week.

davidBar-On · Apr 17 '22 17:04

Hi David.

Thanks for the quick response.

Yes, I built it on master since I was trying to combine your other patches (where the same socket patch would only build on master). The server itself is running the exact same build as the client (actually the same executable, running on localhost).

I did merge all the changes, but some of them had to be adapted slightly (e.g. a size variable was already calculated a few lines earlier, some case values already used by master had to be renumbered, some offsets adjusted, etc.); as far as I know, all of them were merged in.

On the server, the debug output says:

get_parameters:
{
        "udp":  true,
        "omit": 0,
        "time": 0,
        "blockcount":   5,
        "parallel":     1,
        "reverse":      true,
        "len":  100,
        "len_max":      200,
        "bandwidth":    1048576,
        "pacing_timer": 1000,
        "client_version":       "3.11"
}
Accepted connection from 127.0.0.1, port 33352
SNDBUF is 212992, expecting 0
RCVBUF is 212992, expecting 0
Setting application pacing to 131072
Buffer 200 bytes
[  5] local 127.0.0.1 port 8000 connected to 127.0.0.1 port 56893
sent 188 bytes of 188 after waiting 0[ms], total 188
sent 166 bytes of 166 after waiting 0[ms], total 354
sent 151 bytes of 151 after waiting 0[ms], total 505
sent 197 bytes of 197 after waiting 0[ms], total 702
sent 124 bytes of 124 after waiting 0[ms], total 826
sent 189 bytes of 189 after waiting 0[ms], total 1015
sent 187 bytes of 187 after waiting 0[ms], total 1202
sent 113 bytes of 113 after waiting 0[ms], total 1315
sent 138 bytes of 138 after waiting 0[ms], total 1453
sent 146 bytes of 146 after waiting 0[ms], total 1599
interval_len 0.012323 bytes_transferred 1599

On the client, it says:

Minimum sleep() time is 1.078300[ms]
send_parameters:
{
        "udp":  true,
        "omit": 0,
        "time": 0,
        "blockcount":   5,
        "parallel":     1,
        "reverse":      true,
        "len":  100,
        "len_max":      200,
        "bandwidth":    1048576,
        "pacing_timer": 1000,
        "client_version":       "3.11"
}
Connecting to host 127.0.0.1, port 8000
Reverse mode, remote host 127.0.0.1 is sending
SNDBUF is 212992, expecting 0
RCVBUF is 212992, expecting 0
Setting application pacing to 131072
Sending Connect message to Sockt 5
Connect received for Socket 5, sz=4, buf=39383736, i=0, max_len_wait_for_reply=204
Buffer 200 bytes
[  5] local 127.0.0.1 port 56893 connected to 127.0.0.1 port 8000
received 200 bytes of 200, total 200
pcount 1 packet_count 0 size 200
received 200 bytes of 200, total 400
pcount 3 packet_count 1 size 200
received 200 bytes of 200, total 600
pcount 5 packet_count 3 size 200
received 200 bytes of 200, total 800
pcount 7 packet_count 5 size 200
received 200 bytes of 200, total 1000
pcount 9 packet_count 7 size 200

The client is receiving only odd-numbered packets, and at the wrong size (interestingly, in reverse mode the server tries to send ~2x the packets, but the client only expects half of these and marks the rest as errors). The varying sizes and sequential sequence numbers were confirmed with tcpdump, and I ran this on localhost to rule out real packet loss, although the same happens against a remote server as well. I wonder if I missed a piece of the code somewhere, or whether other changes in master altered some behavior that I need to adjust for.

Thanks for your time.

swg0101 · Apr 17 '22 17:04

Hi @swg0101, the problem seems to be in this line: sp->settings->blksize_max should be sp->settings->blksize (the error causes Nread to be called instead of Pread).
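
For context, one plausible shape of the code in question (a hypothetical reconstruction, not the exact diff; Pread is the PR's variable-length read helper, and the argument lists shown here are assumed):

/* With blksize_max used where blksize was intended, the comparison never
 * selected the variable-length path, so the fixed-length Nread() was
 * always called even when random packet sizes were in use. */
if (sp->settings->blksize < sp->settings->blksize_max)   /* random sizes requested */
    r = Pread(sp->socket, sp->buffer, sp->settings->blksize_max);   /* variable-length */
else
    r = Nread(sp->socket, sp->buffer, sp->settings->blksize, Pudp); /* fixed-length */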

davidBar-On · Apr 17 '22 18:04

@davidBar-On - Excellent - thanks for catching that. Now everything seems to be working correctly.

swg0101 · Apr 17 '22 18:04