Rather large difference between numbers shown
With the bandwidthtester receiver on a 100 Mbit/s Ethernet link:
summary 22.1 Transmit rate = 93584kbps, PLR = 0.00%( 0.00%), RTT = 0.009s, Queue delay = 0.025s
summary 24.1 Transmit rate = 94336kbps, PLR = 0.00%( 0.00%), RTT = 0.008s, Queue delay = 0.024s
summary 26.1 Transmit rate = 93545kbps, PLR = 0.00%( 0.00%), RTT = 0.012s, Queue delay = 0.033s
The -log CSV differs rather a lot though: the highest I get (shortly before ^C-ing) is:
- 84352424 Media coder bitrate [bps]
- 85207824 Transmitted bitrate [bps]
- 84976040 ACKed bitrate [bps]
That’s roughly 8–9 Mbit/s less than the summary’s “Transmit rate”! Where does this difference come from?
Looking at the code, the “Transmit rate” command-line output seems to be tracked completely independently of the values that actually end up in the CSV. That doesn’t make sense to me: shouldn’t the reported bandwidth depend on how many bytes per second actually get through the link?
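To illustrate what I mean by “tracked independently”, here is a minimal, self-contained sketch (not taken from the scream sources; `RateMeter`, the window lengths and the traffic pattern are all made up). Two counters that are fed the exact same packets, but averaged over different windows and sampled at different instants, already end up reporting noticeably different bitrates:

```cpp
// Hypothetical illustration, not scream's actual code: two rate meters fed
// with the exact same packets, but averaged over different windows, report
// different bitrates for the same traffic.
#include <cstdint>
#include <cstdio>

// Simple rate meter: accumulates bytes and reports bits/s over its own window.
struct RateMeter {
    double windowSec;          // averaging window length [s]
    uint64_t bytes = 0;        // bytes accumulated in the current window
    double windowStart = 0.0;  // start time of the current window [s]
    double lastRateBps = 0.0;  // rate reported for the last closed window

    explicit RateMeter(double w) : windowSec(w) {}

    void addPacket(double now, uint32_t sizeBytes) {
        bytes += sizeBytes;
        if (now - windowStart >= windowSec) {
            lastRateBps = 8.0 * bytes / (now - windowStart);
            bytes = 0;
            windowStart = now;
        }
    }
};

int main() {
    RateMeter summaryRate(2.0);  // long window, like a periodic summary printout
    RateMeter logRate(0.2);      // short window, like a per-sample log column

    // Feed both meters the same bursty traffic: 1200-byte packets,
    // alternating between a fast and a slow sending phase every second.
    double t = 0.0;
    for (int n = 0; n < 100000; n++) {
        bool fastPhase = (static_cast<int>(t) % 2) == 0;
        t += fastPhase ? 0.0001 : 0.0002;  // inter-packet gap [s]
        summaryRate.addPacket(t, 1200);
        logRate.addPacket(t, 1200);
    }

    printf("summary-style rate: %.1f kbps\n", summaryRate.lastRateBps / 1000.0);
    printf("log-style rate:     %.1f kbps\n", logRate.lastRateBps / 1000.0);
    return 0;
}
```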
Isn’t ACKed bitrate (plus packet overhead) the better estimate?
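Just to make the overhead argument concrete, a back-of-the-envelope calculation, assuming the ACKed column counts RTP-level bytes and a payload of roughly 1200 bytes per packet (neither of which I have verified):

```cpp
// Back-of-the-envelope only: translate an "ACKed" payload bitrate into an
// estimated on-the-wire Ethernet bitrate by adding per-packet header overhead.
// The 1200-byte payload size is an assumption, not a measured value.
#include <cstdio>

int main() {
    const double ackedBps = 84976040.0;  // "ACKed bitrate [bps]" from the CSV

    // Assumed per-packet overhead for RTP/UDP/IPv4 over Ethernet:
    const double payloadBytes  = 1200.0;           // RTP payload per packet (guess)
    const double rtpUdpIpBytes = 12 + 8 + 20;      // RTP + UDP + IPv4 headers
    const double ethBytes      = 14 + 4 + 8 + 12;  // MAC hdr + FCS + preamble + IFG

    const double wireBytesPerPkt = payloadBytes + rtpUdpIpBytes + ethBytes;
    const double wireBps = ackedBps * wireBytesPerPkt / payloadBytes;

    printf("estimated on-wire rate: %.1f Mbit/s\n", wireBps / 1e6);
    return 0;
}
```

With those (unverified) assumptions the ACKed figure plus per-packet overhead lands at roughly 90 Mbit/s, i.e. in the same ballpark as the summary’s “Transmit rate”.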