
consistency of json vs human-readable UDP upstream tests

ggmartins opened this issue 3 years ago • 2 comments

Context

  • Version of iperf3:
iperf 3.9 (cJSON 1.7.13)
Linux netrics 5.4.0-1042-raspi #46-Ubuntu SMP PREEMPT Fri Jul 30 00:35:40 UTC 2021 aarch64
Optional features available: CPU affinity setting, IPv6 flow label, TCP congestion algorithm setting, sendfile / zerocopy, socket pacing
  • Hardware: Raspberry Pi 4 8GB

  • Operating system (and distribution, if any): Ubuntu 20.04.2 LTS

Bug Report

This is not necessarily a bug as far as we understand, and we tried other channels, like the forum indicated for questions, but our team member hasn't been approved yet, so I'm asking here as a last resort. This measurement is part of a large funded study, so it's really important for us to understand this correctly. We really appreciate your work on such an amazing tool as iperf3 and hope we're not taking too much of your time. For now, I'm cc'ing James Saxon's question here:

We have been running iperf3 3.10 on Ubuntu 20.04 Raspberry Pis to test upstream UDP throughput to university servers. We switched from recording the human-readable format to parsing the JSON:

iperf3 -c oursite.edu -p <ourport> -b 1.6M -u -P 4 -t 5 -i 0 -J | jq .end.sum

and then calculating:

bits_per_second * (100 - lost_percent) / 100

This very reliably yields 6.15-6.20 Mbps on my dinky home connection, whereas the human-readable version quotes 6.05-6.10 Mbps.
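For concreteness, that sender-side arithmetic can be checked against the JSON summary reproduced later in this report; a minimal Python sketch (the two input numbers are copied from the end.sum block below):

```python
# Values copied from the end.sum JSON summary quoted in this report.
bits_per_second = 6406234.719144246  # end.sum.bits_per_second
lost_percent = 3.5664335664335662    # end.sum.lost_percent

# The questioner's reconstruction: scale the sent bitrate by the
# fraction of datagrams that were not reported lost.
goodput = bits_per_second * (100 - lost_percent) / 100
print(f"{goodput / 1e6:.2f} Mbps")  # 6.18 Mbps
```

This lands in the 6.15-6.20 Mbps band the questioner sees from the JSON path, above the 6.05-6.10 Mbps quoted by the human-readable receiver summary.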

Am I reconstructing this correctly? If so, which one should I trust? If not, is there a way to do this?

From what I can tell, the received datagram counts are not included in the JSON output (could they be added?). I seem to recover the human-readable result from the calculation

bitrate sent * (total received - lost received) / (total sent)

(Again, the total and lost received datagram counts appear only in the human-readable output.)
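Plugging the [SUM] totals from the human-readable run below into that formula gives a figure close to the receiver's summary line; a sketch (the counts are taken from the receiver and sender [SUM] lines in this report):

```python
# Figures from the [SUM] lines of the human-readable run in this report.
sender_bps = 6406234.719144246  # sender summary bitrate (shown as 6.41 Mbits/sec)
recv_total = 2825               # receiver [SUM] Total Datagrams
recv_lost = 103                 # receiver [SUM] Lost Datagrams
sent_total = 2860               # sender [SUM] Total Datagrams

# bitrate sent * (total received - lost received) / (total sent)
estimate = sender_bps * (recv_total - recv_lost) / sent_total
print(f"{estimate / 1e6:.2f} Mbps")  # 6.10 Mbps
```

This is close to, though not exactly, the 6.06 Mbps on the receiver [SUM] line; the residual gap is plausibly the receiver's slightly longer measurement interval (5.03 s vs the sender's 5.00 s).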

For reference, one run with JSON and one run with human-readable are below.

Thanks for your help and for the great tool!!

Jamie Saxon

{
  "start": 0,
  "end": 5.012857,
  "seconds": 5.012857,
  "bytes": 4004000,
  "bits_per_second": 6406234.719144246,
  "jitter_ms": 2.5463201604163417,
  "lost_packets": 102,
  "packets": 2860,
  "lost_percent": 3.5664335664335662,
  "sender": true
}
Connecting to host oursite.edu, port XX
[....]
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-5.00   sec   978 KBytes  1.60 Mbits/sec  715
[  7]   0.00-5.00   sec   978 KBytes  1.60 Mbits/sec  715
[  9]   0.00-5.00   sec   978 KBytes  1.60 Mbits/sec  715
[ 11]   0.00-5.00   sec   978 KBytes  1.60 Mbits/sec  715
[SUM]   0.00-5.00   sec  3.82 MBytes  6.41 Mbits/sec  2860
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-5.00   sec   978 KBytes  1.60 Mbits/sec  0.000 ms  0/715 (0%)  sender
[  5]   0.00-5.03   sec   965 KBytes  1.57 Mbits/sec  2.317 ms  0/706 (0%)  receiver
[  7]   0.00-5.00   sec   978 KBytes  1.60 Mbits/sec  0.000 ms  0/715 (0%)  sender
[  7]   0.00-5.03   sec   962 KBytes  1.57 Mbits/sec  3.051 ms  3/707 (0.42%)  receiver
[  9]   0.00-5.00   sec   978 KBytes  1.60 Mbits/sec  0.000 ms  0/715 (0%)  sender
[  9]   0.00-5.03   sec   942 KBytes  1.53 Mbits/sec  2.320 ms  17/706 (2.4%)  receiver
[ 11]   0.00-5.00   sec   978 KBytes  1.60 Mbits/sec  0.000 ms  0/715 (0%)  sender
[ 11]   0.00-5.03   sec   852 KBytes  1.39 Mbits/sec  2.315 ms  83/706 (12%)  receiver
[SUM]   0.00-5.00   sec  3.82 MBytes  6.41 Mbits/sec  0.000 ms  0/2860 (0%)  sender
[SUM]   0.00-5.03   sec  3.63 MBytes  6.06 Mbits/sec  2.501 ms  103/2825 (3.6%)  receiver

Note that this is closely related to:

https://groups.google.com/u/1/g/iperf-dev/c/dAfkp3VX0Mo/m/dMUGl7VOAwAJ
https://github.com/esnet/iperf/commit/e255a12eb9d029f6a48a9a6e55e36d7c0921ec53

ggmartins avatar Oct 11 '21 16:10 ggmartins

I believe you are right that the issue is because "the last UDP packet can still be in flight when the test ends". About a year ago I submitted PR #1071, which should allow waiting until all packets arrive. If you are able to build iperf3, you can try this version and use the newly introduced --wait-all-received option.

Although the PR code is based on an older iperf3 version, this should not be an issue. Note that I was not able to test with real delays, but at least this version can show whether the problem is caused by not receiving the last packets. If it is, then with this version the amount of data sent should be exactly equal to the amount of data received plus the lost packets.
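A quick sanity check on the counts from the report above is consistent with the in-flight explanation; a sketch using the [SUM] figures (the receiver's received count is its total minus its losses):

```python
sent = 2860        # sender [SUM] Total Datagrams
recv_total = 2825  # receiver [SUM] Total Datagrams
lost = 103         # receiver [SUM] Lost Datagrams

received = recv_total - lost
# If no packets were in flight at test end, sent == received + lost
# should hold exactly. Here 35 datagrams are unaccounted for --
# consistent with packets still in flight when the test ended.
in_flight = sent - (received + lost)
print(in_flight)  # 35
```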

davidBar-On avatar Oct 12 '21 18:10 davidBar-On

Prior to iperf 3.11, only the sender-side results were included in the JSON output, as the end node. Try iperf 3.11, which includes PR 1174 and adds end.sum_sent and end.sum_received nodes to the JSON output. end.sum_received should have the right value.
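With those nodes present, the receiver-side summary can be read directly instead of being reconstructed from sender statistics; a minimal sketch (the JSON fragment and its numbers are invented for illustration, following the end.sum_sent/end.sum_received layout described above):

```python
import json

# Illustrative iperf 3.11 output fragment (values invented for the sketch).
output = '''
{
  "end": {
    "sum_sent": {"bits_per_second": 6406234.7, "lost_percent": 0.0},
    "sum_received": {"bits_per_second": 6060000.0, "lost_percent": 3.6}
  }
}
'''

end = json.loads(output)["end"]
# Report the receiver-side throughput, which already accounts for losses
# and for datagrams still in flight when the sender stopped.
print(f"{end['sum_received']['bits_per_second'] / 1e6:.2f} Mbps")  # 6.06 Mbps
```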

TheRealDJ avatar Mar 29 '22 22:03 TheRealDJ