CUBIC RFC 9438
This initial draft PR only scopes the update from our current implementation to RFC 9438 in code comments and updates RFC references; it doesn't change any code yet.
I mostly did this for myself to scope the update and check what changed, but thought I'd upload it as a draft already so people can give their input in case I've missed something or need additional context somewhere.
Notes about what changed are prefixed with the keyword UPDATE; open questions that I still need to look into or gather feedback on are prefixed with QUESTION.
I'll have to focus on some other things over the next few weeks, but will chip away at this on the side.
Closes #1912
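For reviewers who want a quick refresher on what the RFC defines, the core of RFC 9438 is the cubic window-growth function W_cubic(t) = C·(t − K)³ + W_max with K = cbrt(W_max·(1 − β)/C). A minimal, self-contained sketch (illustrative only, not neqo's actual code; window in segments, RFC-recommended constants C = 0.4 and β_cubic = 0.7):

```rust
/// Time (in seconds) for the cubic function to regrow to `w_max`
/// after a loss, per RFC 9438, Section 4.2.
fn cubic_k(w_max: f64, beta: f64, c: f64) -> f64 {
    (w_max * (1.0 - beta) / c).cbrt()
}

/// The cubic window at `t` seconds after the start of the current
/// congestion avoidance epoch.
fn w_cubic(t: f64, w_max: f64, beta: f64, c: f64) -> f64 {
    let k = cubic_k(w_max, beta, c);
    c * (t - k).powi(3) + w_max
}

fn main() {
    let (w_max, beta, c) = (100.0, 0.7, 0.4);
    let k = cubic_k(w_max, beta, c);
    // At t = 0 the window starts from beta * w_max (the multiplicative
    // decrease), because C * (0 - K)^3 = -w_max * (1 - beta) exactly.
    assert!((w_cubic(0.0, w_max, beta, c) - 70.0).abs() < 1e-9);
    // After K seconds the window has regrown to w_max (the plateau).
    assert!((w_cubic(k, w_max, beta, c) - w_max).abs() < 1e-9);
    println!("K = {k:.3} s");
}
```

The concave-then-convex shape around the `w_max` plateau is what most of the comment updates in this PR reference.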
Benchmark results
Performance differences relative to 27a42e41ac05a2a443503c6a3989e185bc173eae.
1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client: :green_heart: Performance has improved.
time: [642.75 ms 643.55 ms 644.43 ms]
thrpt: [155.18 MiB/s 155.39 MiB/s 155.58 MiB/s]
change:
time: [−2.0415% −1.8636% −1.6761%] (p = 0.00 < 0.05)
thrpt: [… +1.8990% +2.0840%]
Found 4 outliers among 100 measurements (4.00%)
2 (2.00%) high mild
2 (2.00%) high severe
1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client: Change within noise threshold.
time: [293.66 ms 295.29 ms 296.94 ms]
thrpt: [33.677 Kelem/s 33.865 Kelem/s 34.053 Kelem/s]
change:
time: [−1.5914% −0.8328% −0.1070%] (p = 0.03 < 0.05)
thrpt: [… +0.8398% +1.6172%]
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client: No change in performance detected.
time: [26.972 ms 27.148 ms 27.355 ms]
thrpt: [36.557 elem/s 36.835 elem/s 37.076 elem/s]
change:
time: [−0.8144% +0.0412% +0.9374%] (p = 0.93 > 0.05)
thrpt: [−0.9287% −0.0412% +0.8211%]
Found 7 outliers among 100 measurements (7.00%)
1 (1.00%) high mild
6 (6.00%) high severe
1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client: :green_heart: Performance has improved.
time: [648.15 ms 649.17 ms 650.19 ms]
thrpt: [153.80 MiB/s 154.04 MiB/s 154.29 MiB/s]
change:
time: [−27.103% −26.092% −24.939%] (p = 0.00 < 0.05)
thrpt: [… +35.303% +37.179%]
decode 4096 bytes, mask ff: No change in performance detected.
time: [11.793 µs 11.813 µs 11.840 µs]
change: [−0.4170% +0.0330% +0.4947%] (p = 0.89 > 0.05)
Found 15 outliers among 100 measurements (15.00%)
5 (5.00%) low severe
3 (3.00%) low mild
1 (1.00%) high mild
6 (6.00%) high severe
decode 1048576 bytes, mask ff: No change in performance detected.
time: [3.0200 ms 3.0277 ms 3.0371 ms]
change: [−0.5989% −0.1516% +0.2839%] (p = 0.51 > 0.05)
Found 10 outliers among 100 measurements (10.00%)
3 (3.00%) low mild
7 (7.00%) high severe
decode 4096 bytes, mask 7f: No change in performance detected.
time: [19.950 µs 20.002 µs 20.059 µs]
change: [−0.5502% −0.1417% +0.2375%] (p = 0.49 > 0.05)
Found 16 outliers among 100 measurements (16.00%)
3 (3.00%) low mild
1 (1.00%) high mild
12 (12.00%) high severe
decode 1048576 bytes, mask 7f: No change in performance detected.
time: [5.0383 ms 5.0499 ms 5.0631 ms]
change: [−0.2712% +0.0699% +0.4035%] (p = 0.70 > 0.05)
Found 16 outliers among 100 measurements (16.00%)
1 (1.00%) high mild
15 (15.00%) high severe
decode 4096 bytes, mask 3f: No change in performance detected.
time: [8.2574 µs 8.2864 µs 8.3219 µs]
change: [−0.9440% −0.3638% +0.1339%] (p = 0.20 > 0.05)
Found 18 outliers among 100 measurements (18.00%)
8 (8.00%) low mild
4 (4.00%) high mild
6 (6.00%) high severe
decode 1048576 bytes, mask 3f: No change in performance detected.
time: [1.5853 ms 1.5909 ms 1.5978 ms]
change: [−1.7097% −0.5437% +0.3525%] (p = 0.35 > 0.05)
Found 8 outliers among 100 measurements (8.00%)
1 (1.00%) high mild
7 (7.00%) high severe
1000 streams of 1 bytes/multistream: Change within noise threshold.
time: [28.771 ns 28.981 ns 29.195 ns]
change: [+0.9033% +2.2053% +3.4484%] (p = 0.00 < 0.05)
Found 24 outliers among 500 measurements (4.80%)
21 (4.20%) high mild
3 (0.60%) high severe
1000 streams of 1000 bytes/multistream: No change in performance detected.
time: [29.121 ns 29.363 ns 29.615 ns]
change: [−0.0932% +1.1994% +2.5959%] (p = 0.08 > 0.05)
Found 7 outliers among 500 measurements (1.40%)
6 (1.20%) high mild
1 (0.20%) high severe
coalesce_acked_from_zero 1+1 entries: Change within noise threshold.
time: [88.071 ns 88.422 ns 88.770 ns]
change: [−2.1323% −1.0008% −0.1692%] (p = 0.03 < 0.05)
Found 10 outliers among 100 measurements (10.00%)
8 (8.00%) high mild
2 (2.00%) high severe
coalesce_acked_from_zero 3+1 entries: No change in performance detected.
time: [105.81 ns 106.19 ns 106.59 ns]
change: [−1.1737% −0.5875% +0.1189%] (p = 0.07 > 0.05)
Found 16 outliers among 100 measurements (16.00%)
1 (1.00%) high mild
15 (15.00%) high severe
coalesce_acked_from_zero 10+1 entries: No change in performance detected.
time: [105.11 ns 105.53 ns 106.03 ns]
change: [−1.3562% −0.5915% +0.0384%] (p = 0.09 > 0.05)
Found 16 outliers among 100 measurements (16.00%)
5 (5.00%) low severe
3 (3.00%) low mild
1 (1.00%) high mild
7 (7.00%) high severe
coalesce_acked_from_zero 1000+1 entries: No change in performance detected.
time: [88.607 ns 88.725 ns 88.863 ns]
change: [−1.8062% −0.7793% +0.3158%] (p = 0.16 > 0.05)
Found 11 outliers among 100 measurements (11.00%)
4 (4.00%) high mild
7 (7.00%) high severe
RxStreamOrderer::inbound_frame(): Change within noise threshold.
time: [107.21 ms 107.27 ms 107.34 ms]
change: [+0.0909% +0.1770% +0.2639%] (p = 0.00 < 0.05)
Found 19 outliers among 100 measurements (19.00%)
2 (2.00%) low severe
10 (10.00%) low mild
6 (6.00%) high mild
1 (1.00%) high severe
sent::Packets::take_ranges: No change in performance detected.
time: [8.0138 µs 8.2177 µs 8.4022 µs]
change: [−1.1014% +5.0310% +14.475%] (p = 0.29 > 0.05)
Found 20 outliers among 100 measurements (20.00%)
4 (4.00%) low severe
12 (12.00%) low mild
3 (3.00%) high mild
1 (1.00%) high severe
transfer/pacing-false/varying-seeds: Change within noise threshold.
time: [34.716 ms 34.780 ms 34.845 ms]
change: [−1.4431% −1.1736% −0.9013%] (p = 0.00 < 0.05)
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
transfer/pacing-true/varying-seeds: Change within noise threshold.
time: [35.259 ms 35.363 ms 35.468 ms]
change: [−1.1706% −0.7723% −0.3650%] (p = 0.00 < 0.05)
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
transfer/pacing-false/same-seed: No change in performance detected.
time: [34.696 ms 34.746 ms 34.795 ms]
change: [−0.4643% −0.2104% +0.0365%] (p = 0.10 > 0.05)
transfer/pacing-true/same-seed: Change within noise threshold.
time: [36.501 ms 36.584 ms 36.664 ms]
change: [+0.4899% +0.7886% +1.0926%] (p = 0.00 < 0.05)
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) low mild
Client/server transfer results
Performance differences relative to 27a42e41ac05a2a443503c6a3989e185bc173eae.
Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.
| Client vs. server (params) | Mean ± σ | Min | Max | MiB/s ± σ | Δ main | Δ main |
|---|---|---|---|---|---|---|
| google vs. google | 450.7 ± 4.2 | 445.1 | 465.5 | 71.0 ± 7.6 | ||
| google vs. neqo (cubic, paced) | 316.5 ± 4.4 | 306.6 | 324.7 | 101.1 ± 7.3 | :green_heart: -3.1 | -1.0% |
| msquic vs. msquic | 127.4 ± 18.9 | 109.6 | 209.5 | 251.1 ± 1.7 | ||
| msquic vs. neqo (cubic, paced) | 273.2 ± 25.3 | 247.9 | 411.8 | 117.1 ± 1.3 | -2.5 | -0.9% |
| neqo vs. google (cubic, paced) | 749.5 ± 6.9 | 693.9 | 763.8 | 42.7 ± 4.6 | -0.3 | -0.0% |
| neqo vs. msquic (cubic, paced) | 155.1 ± 4.8 | 146.0 | 162.9 | 206.4 ± 6.7 | 0.1 | 0.1% |
| neqo vs. neqo (cubic) | 214.0 ± 5.2 | 203.6 | 238.1 | 149.5 ± 6.2 | :green_heart: -3.1 | -1.4% |
| neqo vs. neqo (cubic, paced) | 215.9 ± 4.4 | 206.2 | 224.5 | 148.2 ± 7.3 | -0.9 | -0.4% |
| neqo vs. neqo (reno) | 211.7 ± 4.8 | 202.7 | 231.9 | 151.1 ± 6.7 | -0.6 | -0.3% |
| neqo vs. neqo (reno, paced) | 211.8 ± 4.3 | 203.4 | 223.6 | 151.1 ± 7.4 | :green_heart: -3.9 | -1.8% |
| neqo vs. quiche (cubic, paced) | 192.3 ± 4.6 | 186.2 | 210.0 | 166.4 ± 7.0 | :green_heart: -2.5 | -1.3% |
| neqo vs. s2n (cubic, paced) | 221.3 ± 4.0 | 211.5 | 226.6 | 144.6 ± 8.0 | -0.1 | -0.1% |
| quiche vs. neqo (cubic, paced) | 733.5 ± 169.0 | 430.2 | 966.4 | 43.6 ± 0.2 | :broken_heart: 79.8 | 12.2% |
| quiche vs. quiche | 146.4 ± 4.4 | 138.4 | 157.2 | 218.5 ± 7.3 | ||
| s2n vs. neqo (cubic, paced) | 302.7 ± 13.0 | 274.2 | 338.9 | 105.7 ± 2.5 | 1.0 | 0.3% |
| s2n vs. s2n | 245.1 ± 22.4 | 232.1 | 351.1 | 130.5 ± 1.4 |
Download data for profiler.firefox.com or download performance comparison data.
Also see https://github.com/cloudflare/quiche/blob/master/quiche/src/recovery/congestion/cubic.rs
I kinda wonder if it would make sense to have a standalone Rust crate implementing these CCAs for sharing between QUIC implementations...
Thanks for the link!
That'd be nice, though at this point I'd wonder if anybody would actually end up using it instead of iterating on theirs. I guess making a standalone crate fit into the existing implementations would also be a lot of work.
And more people implementing the CCAs means more people able to give feedback to the RFCs/drafts.
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | t-linux64-ms-280 |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result nanoseconds (ns) (Result Δ%) | Upper Boundary nanoseconds (ns) (Limit %) |
|---|---|---|---|
| 1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client | 📈 view plot 🚷 view threshold | 671,430,000.00 ns(+2.73%)Baseline: 653,567,272.73 ns | 680,716,938.76 ns (98.64%) |
| 1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client | 📈 view plot 🚷 view threshold | 647,950,000.00 ns(+2.28%)Baseline: 633,491,818.18 ns | 650,362,722.84 ns (99.63%) |
| 1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client | 📈 view plot 🚷 view threshold | 26,950,000.00 ns(-0.91%)Baseline: 27,196,727.27 ns | 27,458,590.44 ns (98.15%) |
| 1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client | 📈 view plot 🚷 view threshold | 304,560,000.00 ns(-1.46%)Baseline: 309,061,818.18 ns | 313,630,258.01 ns (97.11%) |
| 1000 streams of 1 bytes/multistream | 📈 view plot 🚷 view threshold | 39.98 ns(-14.73%)Baseline: 46.89 ns | 61.07 ns (65.47%) |
| 1000 streams of 1000 bytes/multistream | 📈 view plot 🚷 view threshold | 40.19 ns(-13.41%)Baseline: 46.41 ns | 58.72 ns (68.43%) |
| RxStreamOrderer::inbound_frame() | 📈 view plot 🚷 view threshold | 110,510,000.00 ns(-0.15%)Baseline: 110,671,818.18 ns | 111,460,676.72 ns (99.15%) |
| SentPackets::take_ranges | 📈 view plot 🚷 view threshold | 8,093.20 ns(+1.80%)Baseline: 7,950.24 ns | 8,114.94 ns (99.73%) |
| coalesce_acked_from_zero 1+1 entries | 📈 view plot 🚷 view threshold | 88.76 ns(+0.11%)Baseline: 88.67 ns | 89.36 ns (99.33%) |
| coalesce_acked_from_zero 10+1 entries | 📈 view plot 🚷 view threshold | 105.60 ns(-0.19%)Baseline: 105.80 ns | 106.91 ns (98.78%) |
| coalesce_acked_from_zero 1000+1 entries | 📈 view plot 🚷 view threshold | 88.79 ns(-0.35%)Baseline: 89.10 ns | 90.28 ns (98.36%) |
| coalesce_acked_from_zero 3+1 entries | 📈 view plot 🚷 view threshold | 105.98 ns(-0.45%)Baseline: 106.45 ns | 107.71 ns (98.39%) |
| decode 1048576 bytes, mask 3f | 📈 view plot 🚷 view threshold | 1,593,700.00 ns(+0.10%)Baseline: 1,592,054.55 ns | 1,597,217.51 ns (99.78%) |
| decode 1048576 bytes, mask 7f | 📈 view plot 🚷 view threshold | 5,059,800.00 ns(+0.03%)Baseline: 5,058,463.64 ns | 5,061,146.51 ns (99.97%) |
| decode 1048576 bytes, mask ff | 📈 view plot 🚷 view threshold | 3,029,400.00 ns(-0.04%)Baseline: 3,030,709.09 ns | 3,036,734.84 ns (99.76%) |
| decode 4096 bytes, mask 3f | 📈 view plot 🚷 view threshold | 8,319.80 ns(+0.31%)Baseline: 8,293.89 ns | 8,345.59 ns (99.69%) |
| decode 4096 bytes, mask 7f | 📈 view plot 🚷 view threshold | 20,003.00 ns(+0.11%)Baseline: 19,980.91 ns | 20,025.43 ns (99.89%) |
| decode 4096 bytes, mask ff | 📈 view plot 🚷 view threshold | 11,851.00 ns(+0.09%)Baseline: 11,840.27 ns | 11,872.63 ns (99.82%) |
| transfer/pacing-false/same-seed | 📈 view plot 🚷 view threshold | 35,190,000.00 ns(+2.18%)Baseline: 34,438,090.91 ns | 35,710,428.08 ns (98.54%) |
| transfer/pacing-false/varying-seeds | 📈 view plot 🚷 view threshold | 35,617,000.00 ns(+2.64%)Baseline: 34,702,363.64 ns | 36,206,982.71 ns (98.37%) |
| transfer/pacing-true/same-seed | 📈 view plot 🚷 view threshold | 37,042,000.00 ns(+2.25%)Baseline: 36,227,181.82 ns | 37,875,037.84 ns (97.80%) |
| transfer/pacing-true/varying-seeds | 📈 view plot 🚷 view threshold | 36,434,000.00 ns(+2.30%)Baseline: 35,614,818.18 ns | 36,998,060.92 ns (98.48%) |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | t-linux64-ms-280 |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| s2n vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 315.63 ms(+1.22%)Baseline: 311.83 ms | 337.04 ms (93.65%) |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | t-linux64-ms-278 |
Click to view all benchmark results
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | t-linux64-ms-278 |
Click to view all benchmark results
| Benchmark | Latency | milliseconds (ms) |
|---|---|---|
| s2n vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 302.71 ms |
Codecov Report
:x: Patch coverage is 98.43750% with 1 line in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 93.10%. Comparing base (160f416) to head (93be0c8).
:warning: Report is 66 commits behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #2535 +/- ##
==========================================
- Coverage 95.48% 93.10% -2.39%
==========================================
Files 120 120
Lines 34957 34952 -5
Branches 34957 34952 -5
==========================================
- Hits 33379 32542 -837
- Misses 1538 1559 +21
- Partials 40 851 +811
| Components | Coverage Δ | |
|---|---|---|
| neqo-common | 97.23% <ø> (-0.91%) |
:arrow_down: |
| neqo-crypto | 83.27% <ø> (-7.28%) |
:arrow_down: |
| neqo-http3 | 92.55% <ø> (-2.00%) |
:arrow_down: |
| neqo-qpack | 94.14% <ø> (-2.09%) |
:arrow_down: |
| neqo-transport | 94.39% <98.43%> (-2.12%) |
:arrow_down: |
| neqo-udp | 80.00% <ø> (-11.22%) |
:arrow_down: |
| mtu | 85.57% <ø> (-1.93%) |
:arrow_down: |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result nanoseconds (ns) (Result Δ%) | Upper Boundary nanoseconds (ns) (Limit %) |
|---|---|---|---|
| 1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client | 📈 view plot 🚷 view threshold | 212,280,000.00 ns(+0.58%)Baseline: 211,064,000.00 ns | 215,408,429.98 ns (98.55%) |
| 1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client | 📈 view plot 🚷 view threshold | 207,370,000.00 ns(+0.12%)Baseline: 207,130,000.00 ns | 210,639,431.81 ns (98.45%) |
| 1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client | 📈 view plot 🚷 view threshold | 28,243,000.00 ns(-0.16%)Baseline: 28,288,800.00 ns | 28,685,979.25 ns (98.46%) |
| 1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client | 📈 view plot 🚷 view threshold | 295,870,000.00 ns(+0.06%)Baseline: 295,694,000.00 ns | 304,732,165.26 ns (97.09%) |
| 1-streams/each-1000-bytes/simulated-time | 📈 view plot 🚷 view threshold | 116,790,000.00 ns(+0.07%)Baseline: 116,704,000.00 ns | 116,916,752.72 ns (99.89%) |
| 1-streams/each-1000-bytes/wallclock-time | 📈 view plot 🚷 view threshold | 612,990.00 ns(-0.16%)Baseline: 613,996.00 ns | 620,819.09 ns (98.74%) |
| 1000-streams/each-1-bytes/simulated-time | 📈 view plot 🚷 view threshold | 14,985,000,000.00 ns(-0.01%)Baseline: 14,986,400,000.00 ns | 14,997,155,762.43 ns (99.92%) |
| 1000-streams/each-1-bytes/wallclock-time | 📈 view plot 🚷 view threshold | 14,656,000.00 ns(-0.27%)Baseline: 14,696,400.00 ns | 14,952,693.39 ns (98.02%) |
| 1000-streams/each-1000-bytes/simulated-time | 📈 view plot 🚷 view threshold | 18,178,000,000.00 ns(-2.93%)Baseline: 18,725,800,000.00 ns | 19,781,786,301.95 ns (91.89%) |
| 1000-streams/each-1000-bytes/wallclock-time | 📈 view plot 🚷 view threshold | 57,230,000.00 ns(+1.12%)Baseline: 56,596,600.00 ns | 58,293,060.54 ns (98.18%) |
| RxStreamOrderer::inbound_frame() | 📈 view plot 🚷 view threshold | 107,120,000.00 ns(-0.55%)Baseline: 107,710,000.00 ns | 109,827,340.26 ns (97.53%) |
| coalesce_acked_from_zero 1+1 entries | 📈 view plot 🚷 view threshold | 88.67 ns(+0.25%)Baseline: 88.45 ns | 89.13 ns (99.48%) |
| coalesce_acked_from_zero 10+1 entries | 📈 view plot 🚷 view threshold | 106.08 ns(+0.11%)Baseline: 105.96 ns | 106.64 ns (99.48%) |
| coalesce_acked_from_zero 1000+1 entries | 📈 view plot 🚷 view threshold | 88.82 ns(-1.36%)Baseline: 90.05 ns | 97.06 ns (91.51%) |
| coalesce_acked_from_zero 3+1 entries | 📈 view plot 🚷 view threshold | 106.97 ns(+0.47%)Baseline: 106.47 ns | 107.49 ns (99.52%) |
| decode 1048576 bytes, mask 3f | 📈 view plot 🚷 view threshold | 1,595,900.00 ns(+0.12%)Baseline: 1,593,980.00 ns | 1,604,364.93 ns (99.47%) |
| decode 1048576 bytes, mask 7f | 📈 view plot 🚷 view threshold | 5,061,900.00 ns(-0.04%)Baseline: 5,063,780.00 ns | 5,086,747.78 ns (99.51%) |
| decode 1048576 bytes, mask ff | 📈 view plot 🚷 view threshold | 3,040,800.00 ns(+0.12%)Baseline: 3,037,040.00 ns | 3,050,960.50 ns (99.67%) |
| decode 4096 bytes, mask 3f | 📈 view plot 🚷 view threshold | 8,311.50 ns(+0.10%)Baseline: 8,303.46 ns | 8,339.99 ns (99.66%) |
| decode 4096 bytes, mask 7f | 📈 view plot 🚷 view threshold | 19,976.00 ns(-0.21%)Baseline: 20,017.40 ns | 20,134.09 ns (99.21%) |
| decode 4096 bytes, mask ff | 📈 view plot 🚷 view threshold | 11,854.00 ns(+0.02%)Baseline: 11,851.60 ns | 11,928.48 ns (99.38%) |
| sent::Packets::take_ranges | 📈 view plot 🚷 view threshold | 4,817.70 ns(+0.36%)Baseline: 4,800.36 ns | 4,847.86 ns (99.38%) |
| transfer/pacing-false/same-seed/simulated-time/run | 📈 view plot 🚷 view threshold | 26,955,000,000.00 ns(+4.67%)Baseline: 25,753,000,000.00 ns | 31,672,471,724.31 ns (85.11%) |
| transfer/pacing-false/same-seed/wallclock-time/run | 📈 view plot 🚷 view threshold | 26,666,000.00 ns(+2.22%)Baseline: 26,086,666.67 ns | 29,229,960.41 ns (91.23%) |
| transfer/pacing-false/varying-seeds/simulated-time/run | 📈 view plot 🚷 view threshold | 25,405,000,000.00 ns(+0.68%)Baseline: 25,233,666,666.67 ns | 26,077,429,413.84 ns (97.42%) |
| transfer/pacing-false/varying-seeds/wallclock-time/run | 📈 view plot 🚷 view threshold | 26,304,000.00 ns(+1.05%)Baseline: 26,031,666.67 ns | 27,441,610.56 ns (95.85%) |
| transfer/pacing-true/same-seed/simulated-time/run | 📈 view plot 🚷 view threshold | 24,946,000,000.00 ns(-1.69%)Baseline: 25,374,000,000.00 ns | 27,481,765,306.16 ns (90.77%) |
| transfer/pacing-true/same-seed/wallclock-time/run | 📈 view plot 🚷 view threshold | 25,865,000.00 ns(-3.52%)Baseline: 26,808,666.67 ns | 31,686,311.05 ns (81.63%) |
| transfer/pacing-true/varying-seeds/simulated-time/run | 📈 view plot 🚷 view threshold | 25,153,000,000.00 ns(+0.41%)Baseline: 25,051,333,333.33 ns | 25,557,860,883.79 ns (98.42%) |
| transfer/pacing-true/varying-seeds/wallclock-time/run | 📈 view plot 🚷 view threshold | 26,719,000.00 ns(-0.46%)Baseline: 26,841,666.67 ns | 27,594,490.54 ns (96.83%) |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| s2n vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 169.72 ms(-1.71%)Baseline: 172.66 ms | 176.93 ms (95.92%) |
I've added a test for the dynamic changing of the alpha value.
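For context, the "dynamic" part refers to RFC 9438, Section 4.3: after a congestion event, the Reno-friendly estimate W_est grows with alpha_cubic = 3·(1 − β)/(1 + β), and alpha_cubic switches to 1 once W_est reaches the cwnd seen at that congestion event. A minimal sketch of that rule (illustrative only, not neqo's actual code; names are hypothetical):

```rust
/// RFC 9438, Section 4.3: alpha used for the Reno-friendly window
/// estimate W_est. It starts at 3 * (1 - beta) / (1 + beta) and is
/// set to 1 once W_est has grown back to the cwnd at the time of the
/// most recent congestion event (`cwnd_at_loss` here is hypothetical).
fn alpha_cubic(w_est: f64, cwnd_at_loss: f64, beta: f64) -> f64 {
    if w_est >= cwnd_at_loss {
        1.0
    } else {
        3.0 * (1.0 - beta) / (1.0 + beta)
    }
}

fn main() {
    // With the recommended beta_cubic = 0.7, alpha starts at
    // 3 * 0.3 / 1.7 ≈ 0.529 ...
    assert!((alpha_cubic(70.0, 100.0, 0.7) - 0.9 / 1.7).abs() < 1e-12);
    // ... and becomes 1 once W_est catches up to the pre-loss cwnd.
    assert!((alpha_cubic(100.0, 100.0, 0.7) - 1.0).abs() < 1e-12);
}
```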
There are some CI failures currently, most of them due to other causes I think (Sanitize is addressed in #2954; the NetBSD failures seem to be due to their CDN being down).
To me, the transfer bench failure (https://github.com/mozilla/neqo/pull/2535#issuecomment-3265896667) seems the most concerning. We see a regression only with pacing-false + same-seed. Could the regression just be due to seed luck? varying-seeds passes for the same benches. I need some input here, though, as I don't know the exact intricacies of the simulator regarding seeding.
@mxinden could you take another look? I'm hoping we can push CUBIC along this week. Let me know if I can help in any way with making review easier, or if I can do anything to further de-risk. We can also go over the changes in person or on a call this week if that helps.
I think this PR is becoming unwieldy - it touches too many things. Suggest to split out smaller pieces that can be merged now, and maybe using a tracking issue as an overview?
@larseggert I agree this is becoming unwieldy, splitting is an excellent idea, thanks. I'll go do that now.
Failed Interop Tests
QUIC Interop Runner, client vs. server, differences relative to 67ad82d7ae7a72e67034ab285854522918a8b0af.
neqo-latest as client
- neqo-latest vs. go-x-net: BP BA
- neqo-latest vs. haproxy: L1 C1 BP BA
- neqo-latest vs. kwik: BP BA
- neqo-latest vs. linuxquic: L1 C1
- neqo-latest vs. lsquic: run cancelled after 20 min
- neqo-latest vs. msquic: R Z A L1 C1
- neqo-latest vs. mvfst: A :rocket:~~L1~~ C1
- neqo-latest vs. neqo: A
- neqo-latest vs. neqo-latest: A
- neqo-latest vs. nginx: BP BA
- neqo-latest vs. ngtcp2: E :rocket:~~L1~~ CM
- neqo-latest vs. picoquic: run cancelled after 20 min
- neqo-latest vs. quic-go: A
- neqo-latest vs. quiche: BP BA
- neqo-latest vs. quinn: A :rocket:~~L1~~
- neqo-latest vs. s2n-quic: E :warning:BP BA CM
- neqo-latest vs. tquic: S :rocket:~~A~~ BP BA
- neqo-latest vs. xquic: A L1 :rocket:~~L2~~ C1
neqo-latest as server
- aioquic vs. neqo-latest: CM
- go-x-net vs. neqo-latest: CM
- kwik vs. neqo-latest: BP BA CM
- lsquic vs. neqo-latest: :rocket:~~C1~~ :warning:L1
- msquic vs. neqo-latest: U CM
- mvfst vs. neqo-latest: Z A L1 C1 CM
- neqo vs. neqo-latest: A
- openssl vs. neqo-latest: LR M A CM
- quic-go vs. neqo-latest: CM
- quiche vs. neqo-latest: run cancelled after 20 min
- quinn vs. neqo-latest: V2 CM
- s2n-quic vs. neqo-latest: CM
- tquic vs. neqo-latest: CM
- xquic vs. neqo-latest: M CM
All results
Succeeded Interop Tests
QUIC Interop Runner, client vs. server
neqo-latest as client
- neqo-latest vs. aioquic: H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6 V2 BP BA
- neqo-latest vs. go-x-net: H DC LR M B U A L2 C2 6
- neqo-latest vs. haproxy: H DC LR C20 M S R Z 3 B U A L2 C2 6 V2
- neqo-latest vs. kwik: H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6 V2
- neqo-latest vs. linuxquic: H DC LR C20 M S R Z 3 B U E A L2 C2 6 V2 BP BA CM
- neqo-latest vs. msquic: H DC LR C20 M S B U L2 C2 6 V2 BP BA
- neqo-latest vs. mvfst: H DC LR M R Z 3 B U :rocket:~~L1~~ L2 C2 6 BP BA
- neqo-latest vs. neqo: H DC LR C20 M S R Z 3 B U E L1 L2 C1 C2 6 V2 BP BA CM
- neqo-latest vs. neqo-latest: H DC LR C20 M S R Z 3 B U E L1 L2 C1 C2 6 V2 BP BA CM
- neqo-latest vs. nginx: H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6
- neqo-latest vs. ngtcp2: H DC LR C20 M S R Z 3 B U A :rocket:~~L1~~ L2 C1 C2 6 V2 BP BA
- neqo-latest vs. quic-go: H DC LR C20 M S R Z 3 B U L1 L2 C1 C2 6 BP BA
- neqo-latest vs. quiche: H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6
- neqo-latest vs. quinn: H DC LR C20 M S R Z 3 B U E :rocket:~~L1~~ L2 C1 C2 6 BP BA
- neqo-latest vs. s2n-quic: H DC LR C20 M S R 3 B U A L1 L2 C1 C2 6 :warning:BP
- neqo-latest vs. tquic: H DC LR C20 M R Z 3 B U :rocket:~~A~~ L1 L2 C1 C2 6
- neqo-latest vs. xquic: H DC LR C20 M R Z 3 B U :rocket:~~L2~~ C2 6 BP BA
neqo-latest as server
- aioquic vs. neqo-latest: H DC LR C20 M S R Z 3 B A L1 L2 C1 C2 6 V2 BP BA
- chrome vs. neqo-latest: 3
- go-x-net vs. neqo-latest: H DC LR M B U A L2 C2 6 BP BA
- kwik vs. neqo-latest: H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6 V2
- linuxquic vs. neqo-latest: H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 V2 BP BA CM
- lsquic vs. neqo-latest: H DC LR C20 M S R 3 B E A :warning:L1 L2 :rocket:~~C1~~ C2 6 V2 BP BA CM
- msquic vs. neqo-latest: H DC LR C20 M S R Z B A L1 L2 C1 C2 6 V2 BP BA
- mvfst vs. neqo-latest: H DC LR M 3 B L2 C2 6 BP BA
- neqo vs. neqo-latest: H DC LR C20 M S R Z 3 B U E L1 L2 C1 C2 6 V2 BP BA CM
- ngtcp2 vs. neqo-latest: H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 V2 BP BA CM
- openssl vs. neqo-latest: H DC C20 S R 3 B L2 C2 6 BP BA
- picoquic vs. neqo-latest: H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 V2 BP BA CM
- quic-go vs. neqo-latest: H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6 BP BA
- quinn vs. neqo-latest: H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 BP BA
- s2n-quic vs. neqo-latest: H DC LR M S R 3 B E A L1 L2 C1 C2 6 BP BA
- tquic vs. neqo-latest: H DC LR M S R Z 3 B A L1 L2 C1 C2 6 BP BA
- xquic vs. neqo-latest: H DC LR C20 S R Z 3 B U A L1 L2 C1 C2 6 BP BA
Unsupported Interop Tests
QUIC Interop Runner, client vs. server
neqo-latest as client
- neqo-latest vs. aioquic: E CM
- neqo-latest vs. go-x-net: C20 S R Z 3 E L1 C1 V2 CM
- neqo-latest vs. haproxy: E CM
- neqo-latest vs. kwik: E CM
- neqo-latest vs. msquic: 3 E CM
- neqo-latest vs. mvfst: C20 S E V2 CM
- neqo-latest vs. nginx: E V2 CM
- neqo-latest vs. quic-go: E V2 CM
- neqo-latest vs. quiche: E V2 CM
- neqo-latest vs. quinn: V2 CM
- neqo-latest vs. s2n-quic: Z V2
- neqo-latest vs. tquic: E V2 CM
- neqo-latest vs. xquic: S E V2 CM
neqo-latest as server
- aioquic vs. neqo-latest: U E
- chrome vs. neqo-latest: H DC LR C20 M S R Z B U E A L1 L2 C1 C2 6 V2 BP BA CM
- go-x-net vs. neqo-latest: C20 S R Z 3 E L1 C1 V2
- kwik vs. neqo-latest: E
- lsquic vs. neqo-latest: Z U
- msquic vs. neqo-latest: 3 E
- mvfst vs. neqo-latest: C20 S R U E V2
- openssl vs. neqo-latest: Z U E L1 C1 V2
- quic-go vs. neqo-latest: E V2
- s2n-quic vs. neqo-latest: C20 Z U V2
- tquic vs. neqo-latest: C20 U E V2
- xquic vs. neqo-latest: E V2
Client/server transfer results
Performance differences relative to 160f416cc3753963c4535b850f61ea82d6584c1c.
Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.
| Client vs. server (params) | Mean ± σ | Min | Max | MiB/s ± σ | Δ main | Δ main |
|---|---|---|---|---|---|---|
| google vs. google | 456.9 ± 3.6 | 449.7 | 469.7 | 70.0 ± 8.9 | ||
| google vs. neqo (cubic, paced) | 277.4 ± 4.6 | 268.9 | 287.1 | 115.3 ± 7.0 | 0.1 | 0.0% |
| msquic vs. msquic | 163.3 ± 37.9 | 135.9 | 431.4 | 196.0 ± 0.8 | ||
| msquic vs. neqo (cubic, paced) | 196.9 ± 36.7 | 159.6 | 445.1 | 162.5 ± 0.9 | 8.2 | 4.4% |
| neqo vs. google (cubic, paced) | 759.2 ± 4.9 | 751.3 | 776.9 | 42.1 ± 6.5 | 1.2 | 0.2% |
| neqo vs. msquic (cubic, paced) | 156.3 ± 4.3 | 150.6 | 165.2 | 204.7 ± 7.4 | -0.9 | -0.6% |
| neqo vs. neqo (cubic) | 93.2 ± 6.3 | 84.8 | 122.5 | 343.3 ± 5.1 | 0.1 | 0.1% |
| neqo vs. neqo (cubic, paced) | 93.6 ± 4.3 | 86.2 | 101.7 | 342.0 ± 7.4 | :broken_heart: 1.3 | 1.4% |
| neqo vs. neqo (reno) | 91.5 ± 5.1 | 84.7 | 113.8 | 349.9 ± 6.3 | :green_heart: -1.6 | -1.8% |
| neqo vs. neqo (reno, paced) | 94.1 ± 6.5 | 82.8 | 122.1 | 340.1 ± 4.9 | -0.4 | -0.4% |
| neqo vs. quiche (cubic, paced) | 194.8 ± 4.6 | 187.2 | 204.3 | 164.3 ± 7.0 | -0.3 | -0.1% |
| neqo vs. s2n (cubic, paced) | 219.5 ± 4.4 | 212.6 | 235.7 | 145.8 ± 7.3 | :green_heart: -1.5 | -0.7% |
| quiche vs. neqo (cubic, paced) | 149.5 ± 4.6 | 137.5 | 159.4 | 214.1 ± 7.0 | :green_heart: -1.5 | -1.0% |
| quiche vs. quiche | 145.7 ± 5.1 | 136.4 | 156.6 | 219.6 ± 6.3 | ||
| s2n vs. neqo (cubic, paced) | 172.7 ± 4.7 | 163.8 | 182.5 | 185.3 ± 6.8 | -0.7 | -0.4% |
| s2n vs. s2n | 248.4 ± 22.0 | 234.2 | 344.7 | 128.8 ± 1.5 |
Download data for profiler.firefox.com or download performance comparison data.
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| google vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 277.42 ms(-0.15%)Baseline: 277.85 ms | 281.00 ms (98.73%) |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| msquic vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 196.92 ms(+2.19%)Baseline: 192.71 ms | 204.34 ms (96.37%) |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| neqo vs. google (cubic, paced) | 📈 view plot 🚷 view threshold | 759.22 ms(+0.18%)Baseline: 757.86 ms | 764.95 ms (99.25%) |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| neqo vs. msquic (cubic, paced) | 📈 view plot 🚷 view threshold | 156.35 ms(-0.32%)Baseline: 156.84 ms | 158.50 ms (98.64%) |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| neqo vs. neqo (cubic) | 📈 view plot 🚷 view threshold | 93.22 ms(+1.13%)Baseline: 92.18 ms | 94.64 ms (98.50%) |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| neqo vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 93.58 ms(+0.50%)Baseline: 93.12 ms | 95.39 ms (98.10%) |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| neqo vs. neqo (reno) | 📈 view plot 🚷 view threshold | 91.46 ms(-0.14%)Baseline: 91.59 ms | 94.13 ms (97.16%) |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| neqo vs. neqo (reno, paced) | 📈 view plot 🚷 view threshold | 94.10 ms(+0.85%)Baseline: 93.31 ms | 95.66 ms (98.37%) |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| neqo vs. quiche (cubic, paced) | 📈 view plot 🚷 view threshold | 194.79 ms(+0.46%)Baseline: 193.91 ms | 197.22 ms (98.77%) |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| neqo vs. s2n (cubic, paced) | 📈 view plot 🚷 view threshold | 219.50 ms(-0.60%)Baseline: 220.82 ms | 223.69 ms (98.13%) |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| quiche vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 149.47 ms(-0.94%)Baseline: 150.89 ms | 152.75 ms (97.85%) |
Bencher Report
| Branch | cubic_rfc9438 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| s2n vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 172.67 ms(-0.16%)Baseline: 172.95 ms | 176.03 ms (98.09%) |
Benchmark results
Performance differences relative to 160f416cc3753963c4535b850f61ea82d6584c1c.
1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client: No change in performance detected.
time: [207.08 ms 207.37 ms 207.66 ms]
thrpt: [481.57 MiB/s 482.24 MiB/s 482.92 MiB/s]
change:
time: [−0.3977% −0.1486% +0.0694%] (p = 0.22 > 0.05)
thrpt: [−0.0693% +0.1488% +0.3992%]
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
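An aside on reading these criterion reports: the `thrpt` change line is just the `time` change line inverted, thrpt = 1/(1 + Δt) − 1, which is why the two brackets mirror each other with opposite signs and reversed order. This is an inference from the paired values above, not something the output states; a minimal sketch:

```python
def thrpt_change(time_change_pct: float) -> float:
    """Throughput change (%) implied by a change in measured time (%).

    If a run takes (1 + dt) times as long, it moves 1 / (1 + dt)
    times as much data per unit time.
    """
    dt = time_change_pct / 100.0
    return (1.0 / (1.0 + dt) - 1.0) * 100.0

# Download row above: a +0.0694% time change implies roughly a
# -0.0693% throughput change, matching the reported pair up to
# rounding (criterion rounds each estimate independently).
```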
1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client: Change within noise threshold.
time: [294.47 ms 295.87 ms 297.27 ms]
thrpt: [33.640 Kelem/s 33.799 Kelem/s 33.960 Kelem/s]
change:
time: [+0.1427% +0.8718% +1.5941%] (p = 0.02 < 0.05)
thrpt: [… −0.8643% −0.1425%]
1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client: No change in performance detected.
time: [28.117 ms 28.243 ms 28.384 ms]
thrpt: [35.231 elem/s 35.407 elem/s 35.566 elem/s]
change:
time: [−0.4916% +0.2408% +0.9536%] (p = 0.51 > 0.05)
thrpt: [−0.9446% −0.2402% +0.4941%]
Found 6 outliers among 100 measurements (6.00%)
1 (1.00%) high mild
5 (5.00%) high severe
1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client: :broken_heart: Performance has regressed.
time: [211.87 ms 212.28 ms 212.83 ms]
thrpt: [469.86 MiB/s 471.08 MiB/s 471.99 MiB/s]
change:
time: [+1.2315% +1.4917% +1.7640%] (p = 0.00 < 0.05)
thrpt: [… −1.4698% −1.2165%]
Found 5 outliers among 100 measurements (5.00%)
1 (1.00%) low severe
1 (1.00%) low mild
2 (2.00%) high mild
1 (1.00%) high severe
decode 4096 bytes, mask ff: No change in performance detected.
time: [11.810 µs 11.854 µs 11.902 µs]
change: [−0.2518% +0.0905% +0.4601%] (p = 0.61 > 0.05)
Found 17 outliers among 100 measurements (17.00%)
2 (2.00%) low severe
4 (4.00%) low mild
1 (1.00%) high mild
10 (10.00%) high severe
decode 1048576 bytes, mask ff: No change in performance detected.
time: [3.0236 ms 3.0408 ms 3.0657 ms]
change: [−0.7917% +0.0592% +1.1206%] (p = 0.90 > 0.05)
Found 11 outliers among 100 measurements (11.00%)
1 (1.00%) high mild
10 (10.00%) high severe
decode 4096 bytes, mask 7f: No change in performance detected.
time: [19.930 µs 19.976 µs 20.029 µs]
change: [−0.3214% +0.1029% +0.6479%] (p = 0.70 > 0.05)
Found 18 outliers among 100 measurements (18.00%)
2 (2.00%) low severe
5 (5.00%) low mild
1 (1.00%) high mild
10 (10.00%) high severe
decode 1048576 bytes, mask 7f: No change in performance detected.
time: [5.0492 ms 5.0619 ms 5.0752 ms]
change: [−0.3496% +0.0201% +0.3850%] (p = 0.92 > 0.05)
Found 15 outliers among 100 measurements (15.00%)
15 (15.00%) high severe
decode 4096 bytes, mask 3f: No change in performance detected.
time: [8.2775 µs 8.3115 µs 8.3519 µs]
change: [−2.8281% −0.5095% +0.9836%] (p = 0.74 > 0.05)
Found 19 outliers among 100 measurements (19.00%)
4 (4.00%) low mild
2 (2.00%) high mild
13 (13.00%) high severe
decode 1048576 bytes, mask 3f: No change in performance detected.
time: [1.5880 ms 1.5959 ms 1.6064 ms]
change: [−1.1204% −0.0983% +0.7741%] (p = 0.86 > 0.05)
Found 9 outliers among 100 measurements (9.00%)
1 (1.00%) high mild
8 (8.00%) high severe
1-streams/each-1000-bytes/wallclock-time: No change in performance detected.
time: [610.82 µs 612.99 µs 615.45 µs]
change: [−0.6791% −0.1470% +0.3578%] (p = 0.58 > 0.05)
Found 10 outliers among 100 measurements (10.00%)
1 (1.00%) low mild
2 (2.00%) high mild
7 (7.00%) high severe
1-streams/each-1000-bytes/simulated-time
time: [116.60 ms 116.79 ms 116.98 ms]
thrpt: [8.3480 KiB/s 8.3617 KiB/s 8.3756 KiB/s]
change:
time: [−0.1839% +0.0617% +0.3003%] (p = 0.63 > 0.05)
thrpt: [−0.2994% −0.0617% +0.1843%]
No change in performance detected.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) low mild
1000-streams/each-1-bytes/wallclock-time: No change in performance detected.
time: [14.597 ms 14.656 ms 14.746 ms]
change: [−0.3706% +0.1360% +0.7868%] (p = 0.69 > 0.05)
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
1000-streams/each-1-bytes/simulated-time
time: [14.971 s 14.985 s 14.999 s]
thrpt: [66.669 B/s 66.733 B/s 66.796 B/s]
change:
time: [−0.1394% −0.0079% +0.1227%] (p = 0.90 > 0.05)
thrpt: [−0.1225% +0.0079% +0.1396%]
No change in performance detected.
1000-streams/each-1000-bytes/wallclock-time: :broken_heart: Performance has regressed.
time: [57.040 ms 57.230 ms 57.421 ms]
change: [+1.9218% +2.4612% +2.9489%] (p = 0.00 < 0.05)
1000-streams/each-1000-bytes/simulated-time
time: [18.033 s 18.178 s 18.325 s]
thrpt: [53.292 KiB/s 53.722 KiB/s 54.153 KiB/s]
change:
time: [−4.8366% −3.6293% −2.4180%] (p = 0.00 < 0.05)
thrpt: [… +3.7660% +5.0825%]
:green_heart: Performance has improved.
coalesce_acked_from_zero 1+1 entries: No change in performance detected.
time: [88.302 ns 88.669 ns 89.045 ns]
change: [−2.0795% −0.2469% +0.9096%] (p = 0.82 > 0.05)
Found 5 outliers among 100 measurements (5.00%)
3 (3.00%) high mild
2 (2.00%) high severe
coalesce_acked_from_zero 3+1 entries: No change in performance detected.
time: [106.23 ns 106.97 ns 107.99 ns]
change: [−0.2959% +0.2967% +0.9973%] (p = 0.41 > 0.05)
Found 19 outliers among 100 measurements (19.00%)
5 (5.00%) high mild
14 (14.00%) high severe
coalesce_acked_from_zero 10+1 entries: No change in performance detected.
time: [105.59 ns 106.08 ns 106.65 ns]
change: [−0.4444% −0.0123% +0.4264%] (p = 0.96 > 0.05)
Found 14 outliers among 100 measurements (14.00%)
2 (2.00%) low mild
4 (4.00%) high mild
8 (8.00%) high severe
coalesce_acked_from_zero 1000+1 entries: No change in performance detected.
time: [88.729 ns 88.824 ns 88.936 ns]
change: [−1.1331% −0.0597% +1.0515%] (p = 0.92 > 0.05)
Found 11 outliers among 100 measurements (11.00%)
4 (4.00%) high mild
7 (7.00%) high severe
RxStreamOrderer::inbound_frame(): Change within noise threshold.
time: [106.97 ms 107.12 ms 107.37 ms]
change: [−1.3158% −1.1549% −0.8732%] (p = 0.00 < 0.05)
Found 24 outliers among 100 measurements (24.00%)
7 (7.00%) low severe
3 (3.00%) low mild
11 (11.00%) high mild
3 (3.00%) high severe
sent::Packets::take_ranges: No change in performance detected.
time: [4.7070 µs 4.8177 µs 4.9198 µs]
change: [−2.3639% +1.0496% +4.4989%] (p = 0.55 > 0.05)
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
transfer/pacing-false/varying-seeds/wallclock-time/run: Change within noise threshold.
time: [26.257 ms 26.304 ms 26.359 ms]
change: [+1.6439% +1.8795% +2.1267%] (p = 0.00 < 0.05)
Found 4 outliers among 100 measurements (4.00%)
2 (2.00%) low mild
1 (1.00%) high mild
1 (1.00%) high severe
transfer/pacing-false/varying-seeds/simulated-time/run: Change within noise threshold.
time: [25.369 s 25.405 s 25.440 s]
thrpt: [161.00 KiB/s 161.23 KiB/s 161.45 KiB/s]
change:
time: [+0.8166% +1.0203% +1.2139%] (p = 0.00 < 0.05)
thrpt: [… −1.0100% −0.8100%]
Found 6 outliers among 100 measurements (6.00%)
3 (3.00%) low mild
3 (3.00%) high mild
transfer/pacing-true/varying-seeds/wallclock-time/run: Change within noise threshold.
time: [26.653 ms 26.719 ms 26.787 ms]
change: [−1.3297% −0.9743% −0.6461%] (p = 0.00 < 0.05)
transfer/pacing-true/varying-seeds/simulated-time/run: Change within noise threshold.
time: [25.108 s 25.153 s 25.199 s]
thrpt: [162.55 KiB/s 162.84 KiB/s 163.13 KiB/s]
change:
time: [+0.3156% +0.5560% +0.7966%] (p = 0.00 < 0.05)
thrpt: [… −0.5529% −0.3146%]
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
transfer/pacing-false/same-seed/wallclock-time/run: Change within noise threshold.
time: [26.628 ms 26.666 ms 26.716 ms]
change: [+2.2614% +2.4472% +2.6530%] (p = 0.00 < 0.05)
Found 6 outliers among 100 measurements (6.00%)
3 (3.00%) low mild
1 (1.00%) high mild
2 (2.00%) high severe
transfer/pacing-false/same-seed/simulated-time/run: :broken_heart: Performance has regressed.
time: [26.955 s 26.955 s 26.955 s]
thrpt: [151.96 KiB/s 151.96 KiB/s 151.96 KiB/s]
change:
time: [+7.1714% +7.1714% +7.1714%] (p = 0.00 < 0.05)
thrpt: [−6.6915% −6.6915% −6.6915%]
transfer/pacing-true/same-seed/wallclock-time/run: :green_heart: Performance has improved.
time: [25.834 ms 25.865 ms 25.912 ms]
change: [−6.2333% −6.0838% −5.8971%] (p = 0.00 < 0.05)
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe
transfer/pacing-true/same-seed/simulated-time/run: :green_heart: Performance has improved.
time: [24.946 s 24.946 s 24.946 s]
thrpt: [164.19 KiB/s 164.19 KiB/s 164.19 KiB/s]
change:
time: [−2.5095% −2.5095% −2.5095%] (p = 0.00 < 0.05)
thrpt: [+2.5740% +2.5740% +2.5740%]
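Since these benchmarks compare CUBIC implementations against each other, a brief sketch of the window-growth function from RFC 9438 that this PR updates the comments against may help frame the results. This is an illustration in segment/second units, not neqo's actual code:

```python
C = 0.4  # aggressiveness constant from RFC 9438 (segments/second^3)

def cubic_k(w_max: float, cwnd_epoch: float) -> float:
    """K: seconds until the cubic curve returns to w_max, given the
    congestion window cwnd_epoch at the start of the epoch:
    K = cbrt((w_max - cwnd_epoch) / C)."""
    return ((w_max - cwnd_epoch) / C) ** (1.0 / 3.0)

def w_cubic(t: float, w_max: float, cwnd_epoch: float) -> float:
    """Target congestion window (segments) t seconds into the epoch:
    W_cubic(t) = C * (t - K)^3 + w_max."""
    return C * (t - cubic_k(w_max, cwnd_epoch)) ** 3 + w_max

# The curve starts at cwnd_epoch at t = 0, plateaus around w_max at
# t = K, and grows steeply beyond it, which is what the congestion-
# avoidance behavior being benchmarked above implements.
```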