fix: Do CC reaction before `largest_acked`
Packets are only declared lost relative to `largest_acked`. If a PTO fires while we don't have a `largest_acked` yet, also perform a congestion control reaction, because otherwise none would happen.
Broken out of #1998
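For reviewers, here is a minimal sketch of the intended behavior. All names (`LossRecovery`, `CongestionControl`, `on_pto`, `on_congestion_event`) are hypothetical simplifications, not the actual `neqo-transport` API:

```rust
/// Hypothetical congestion controller interface (illustrative only).
trait CongestionControl {
    /// React to a congestion signal, e.g. by reducing the window.
    fn on_congestion_event(&mut self);
}

/// Simplified stand-in for the loss-recovery state.
struct LossRecovery<C: CongestionControl> {
    /// Largest packet number acknowledged by the peer, if any.
    largest_acked: Option<u64>,
    cc: C,
}

impl<C: CongestionControl> LossRecovery<C> {
    /// Called when the probe timeout (PTO) fires.
    fn on_pto(&mut self) {
        if self.largest_acked.is_none() {
            // Loss detection declares packets lost only relative to
            // `largest_acked`. Without any ACK it can never fire, so
            // trigger a congestion control reaction here; otherwise
            // none would happen.
            self.cc.on_congestion_event();
        }
        // ... usual PTO handling: back off the timer and send probes ...
    }
}
```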
Codecov Report
Attention: Patch coverage is 98.14815% with 1 line in your changes missing coverage. Please review.
Project coverage is 95.41%. Comparing base (8b4a9c9) to head (da76a17). Report is 77 commits behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #2117 +/- ##
==========================================
- Coverage 95.41% 95.41% -0.01%
==========================================
Files 115 115
Lines 36996 37018 +22
Branches 36996 37018 +22
==========================================
+ Hits 35301 35321 +20
- Misses 1689 1691 +2
Partials 6 6
| Components | Coverage Δ |
|---|---|
| neqo-common | 97.17% <ø> (ø) |
| neqo-crypto | 90.44% <ø> (ø) |
| neqo-http3 | 94.50% <ø> (ø) |
| neqo-qpack | 96.29% <ø> (ø) |
| neqo-transport | 96.24% <98.14%> (-0.01%) :arrow_down: |
| neqo-udp | 94.70% <ø> (-0.59%) :arrow_down: |
Failed Interop Tests
QUIC Interop Runner, client vs. server, differences relative to 82602cd746a20e3960874c6eabdc5fb25b0aef43.
neqo-latest as client
- neqo-latest vs. aioquic: Z
- neqo-latest vs. go-x-net: BP BA
- neqo-latest vs. haproxy: :warning:L1 BP BA
- neqo-latest vs. kwik: :rocket:~~BP~~ :warning:C1 BA
- neqo-latest vs. lsquic: L1 C1
- neqo-latest vs. msquic: :warning:R Z A L1 C1
- neqo-latest vs. mvfst: A L1 C1 :rocket:~~BA~~
- neqo-latest vs. nginx: BP BA
- neqo-latest vs. ngtcp2: CM
- neqo-latest vs. picoquic: :rocket:~~Z~~ A L1 :warning:C1
- neqo-latest vs. quic-go: A
- neqo-latest vs. quiche: BP BA
- neqo-latest vs. s2n-quic: BP BA CM
- neqo-latest vs. tquic: S BP BA
- neqo-latest vs. xquic: A
neqo-latest as server
- aioquic vs. neqo-latest: run cancelled after 20 min
- go-x-net vs. neqo-latest: CM
- kwik vs. neqo-latest: BP BA CM
- lsquic vs. neqo-latest: run cancelled after 20 min
- msquic vs. neqo-latest: Z U CM
- mvfst vs. neqo-latest: Z A L1 C1 CM
- openssl vs. neqo-latest: LR M CM
- quic-go vs. neqo-latest: CM
- quiche vs. neqo-latest: CM
- quinn vs. neqo-latest: V2 CM
- s2n-quic vs. neqo-latest: CM
- tquic vs. neqo-latest: CM
- xquic vs. neqo-latest: M CM
All results
Succeeded Interop Tests
QUIC Interop Runner, client vs. server
neqo-latest as client
- neqo-latest vs. aioquic: H DC LR C20 M S R 3 B U A L1 L2 C1 C2 6 V2 BP BA
- neqo-latest vs. go-x-net: H DC LR M B U A L2 C2 6
- neqo-latest vs. haproxy: H DC LR C20 M S R Z 3 B U A :warning:L1 L2 C1 C2 6 V2
- neqo-latest vs. kwik: H DC LR C20 M S R Z 3 B U A L1 L2 :warning:C1 C2 6 V2 :rocket:~~BP~~
- neqo-latest vs. lsquic: H DC LR C20 M S R Z 3 B U E A L2 C2 6 V2 BP BA
- neqo-latest vs. msquic: H DC LR C20 M S :warning:R B U L2 C2 6 V2 BP BA
- neqo-latest vs. mvfst: H DC LR M R Z 3 B U L2 C2 6 BP :rocket:~~BA~~
- neqo-latest vs. neqo: H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 V2 BP BA CM
- neqo-latest vs. neqo-latest: H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 V2 BP BA CM
- neqo-latest vs. nginx: H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6
- neqo-latest vs. ngtcp2: H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 V2 BP BA
- neqo-latest vs. picoquic: H DC LR C20 M S R :rocket:~~Z~~ 3 B U E L2 :warning:C1 C2 6 V2 BP BA
- neqo-latest vs. quic-go: H DC LR C20 M S R Z 3 B U L1 L2 C1 C2 6 BP BA
- neqo-latest vs. quiche: H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6
- neqo-latest vs. quinn: H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 BP BA
- neqo-latest vs. s2n-quic: H DC LR C20 M S R 3 B U E A L1 L2 C1 C2 6
- neqo-latest vs. tquic: H DC LR C20 M R Z 3 B U A L1 L2 C1 C2 6
- neqo-latest vs. xquic: H DC LR C20 M R Z 3 B U L1 L2 C1 C2 6 BP BA
neqo-latest as server
- chrome vs. neqo-latest: 3
- go-x-net vs. neqo-latest: H DC LR M B U A L2 C2 6 BP BA
- kwik vs. neqo-latest: H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6 V2
- msquic vs. neqo-latest: H DC LR C20 M S R B A L1 L2 C1 C2 6 V2 BP BA
- mvfst vs. neqo-latest: H DC LR M 3 B L2 C2 6 BP BA
- neqo vs. neqo-latest: H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 V2 BP BA CM
- ngtcp2 vs. neqo-latest: H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 V2 BP BA CM
- openssl vs. neqo-latest: H DC C20 S R 3 B A L2 C2 6 BP BA
- picoquic vs. neqo-latest: H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 V2 BP BA CM
- quic-go vs. neqo-latest: H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6 BP BA
- quiche vs. neqo-latest: H DC LR M S R Z 3 B A L1 L2 C1 C2 6 BP BA
- quinn vs. neqo-latest: H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 BP BA
- s2n-quic vs. neqo-latest: H DC LR M S R 3 B E A L1 L2 C1 C2 6 BP BA
- tquic vs. neqo-latest: H DC LR M S R Z 3 B A L1 L2 C1 C2 6 BP BA
- xquic vs. neqo-latest: H DC LR C20 S R Z 3 B U A L1 L2 C1 C2 6 BP BA
Unsupported Interop Tests
QUIC Interop Runner, client vs. server
neqo-latest as client
- neqo-latest vs. aioquic: E CM
- neqo-latest vs. go-x-net: C20 S R Z 3 E L1 C1 V2 CM
- neqo-latest vs. haproxy: E CM
- neqo-latest vs. kwik: E CM
- neqo-latest vs. lsquic: CM
- neqo-latest vs. msquic: 3 E CM
- neqo-latest vs. mvfst: C20 S E V2 CM
- neqo-latest vs. nginx: E V2 CM
- neqo-latest vs. picoquic: CM
- neqo-latest vs. quic-go: E V2 CM
- neqo-latest vs. quiche: E V2 CM
- neqo-latest vs. quinn: V2 CM
- neqo-latest vs. s2n-quic: Z V2
- neqo-latest vs. tquic: E V2 CM
- neqo-latest vs. xquic: S E V2 CM
neqo-latest as server
- chrome vs. neqo-latest: H DC LR C20 M S R Z B U E A L1 L2 C1 C2 6 V2 BP BA CM
- go-x-net vs. neqo-latest: C20 S R Z 3 E L1 C1 V2
- kwik vs. neqo-latest: E
- msquic vs. neqo-latest: 3 E
- mvfst vs. neqo-latest: C20 S R U E V2
- openssl vs. neqo-latest: Z U E L1 C1 V2
- quic-go vs. neqo-latest: E V2
- quiche vs. neqo-latest: C20 U E V2
- s2n-quic vs. neqo-latest: C20 Z U V2
- tquic vs. neqo-latest: C20 U E V2
- xquic vs. neqo-latest: E V2
Benchmark results
Performance differences relative to 8b4a9c961ad6f2a0e78a3ff08356d055a9c0b39e.
decode 4096 bytes, mask ff: No change in performance detected.
time: [11.752 µs 11.792 µs 11.837 µs]
change: [-0.5325% -0.0005% +0.5403%] (p = 1.00 > 0.05)
Found 12 outliers among 100 measurements (12.00%)
2 (2.00%) low mild
1 (1.00%) high mild
9 (9.00%) high severe
decode 1048576 bytes, mask ff: No change in performance detected.
time: [2.8929 ms 2.9022 ms 2.9132 ms]
change: [-0.2437% +0.1980% +0.6539%] (p = 0.41 > 0.05)
Found 10 outliers among 100 measurements (10.00%)
1 (1.00%) low mild
9 (9.00%) high severe
decode 4096 bytes, mask 7f: No change in performance detected.
time: [19.633 µs 19.684 µs 19.741 µs]
change: [-0.2067% +0.1321% +0.4886%] (p = 0.46 > 0.05)
Found 18 outliers among 100 measurements (18.00%)
1 (1.00%) low severe
3 (3.00%) low mild
14 (14.00%) high severe
decode 1048576 bytes, mask 7f: No change in performance detected.
time: [4.7014 ms 4.7124 ms 4.7248 ms]
change: [-0.3528% -0.0041% +0.3916%] (p = 0.98 > 0.05)
Found 13 outliers among 100 measurements (13.00%)
1 (1.00%) high mild
12 (12.00%) high severe
decode 4096 bytes, mask 3f: No change in performance detected.
time: [6.1958 µs 6.2251 µs 6.2609 µs]
change: [-0.2547% +0.2551% +0.8888%] (p = 0.39 > 0.05)
Found 10 outliers among 100 measurements (10.00%)
1 (1.00%) high mild
9 (9.00%) high severe
decode 1048576 bytes, mask 3f: No change in performance detected.
time: [2.1065 ms 2.1148 ms 2.1245 ms]
change: [-0.5892% -0.0646% +0.4610%] (p = 0.84 > 0.05)
Found 10 outliers among 100 measurements (10.00%)
1 (1.00%) low mild
9 (9.00%) high severe
1 streams of 1 bytes/multistream: Change within noise threshold.
time: [64.570 µs 64.685 µs 64.807 µs]
change: [+0.1665% +0.4429% +0.7116%] (p = 0.00 < 0.05)
Found 6 outliers among 100 measurements (6.00%)
6 (6.00%) high mild
1000 streams of 1 bytes/multistream: Change within noise threshold.
time: [24.310 ms 24.350 ms 24.389 ms]
change: [-0.7491% -0.5219% -0.3028%] (p = 0.00 < 0.05)
10000 streams of 1 bytes/multistream: :green_heart: Performance has improved.
time: [1.6398 s 1.6415 s 1.6431 s]
change: [-1.4953% -1.3587% -1.2240%] (p = 0.00 < 0.05)
Found 11 outliers among 100 measurements (11.00%)
3 (3.00%) low mild
8 (8.00%) high mild
1 streams of 1000 bytes/multistream: No change in performance detected.
time: [65.717 µs 65.839 µs 65.968 µs]
change: [-0.1525% +0.1051% +0.3744%] (p = 0.43 > 0.05)
100 streams of 1000 bytes/multistream: :green_heart: Performance has improved.
time: [3.1479 ms 3.1533 ms 3.1594 ms]
change: [-3.2281% -2.9738% -2.6972%] (p = 0.00 < 0.05)
Found 15 outliers among 100 measurements (15.00%)
15 (15.00%) high severe
1000 streams of 1000 bytes/multistream: :green_heart: Performance has improved.
time: [137.01 ms 137.08 ms 137.16 ms]
change: [-5.5505% -5.4711% -5.3926%] (p = 0.00 < 0.05)
coalesce_acked_from_zero 1+1 entries: No change in performance detected.
time: [92.549 ns 92.787 ns 93.027 ns]
change: [-0.6448% -0.0781% +0.4403%] (p = 0.79 > 0.05)
Found 9 outliers among 100 measurements (9.00%)
7 (7.00%) high mild
2 (2.00%) high severe
coalesce_acked_from_zero 3+1 entries: No change in performance detected.
time: [110.32 ns 110.59 ns 110.89 ns]
change: [-0.2892% +0.0340% +0.4134%] (p = 0.85 > 0.05)
Found 17 outliers among 100 measurements (17.00%)
1 (1.00%) low mild
7 (7.00%) high mild
9 (9.00%) high severe
coalesce_acked_from_zero 10+1 entries: No change in performance detected.
time: [109.83 ns 110.14 ns 110.53 ns]
change: [-2.1370% -0.8102% +0.0472%] (p = 0.17 > 0.05)
Found 16 outliers among 100 measurements (16.00%)
3 (3.00%) low severe
3 (3.00%) low mild
7 (7.00%) high mild
3 (3.00%) high severe
coalesce_acked_from_zero 1000+1 entries: No change in performance detected.
time: [91.777 ns 91.820 ns 91.861 ns]
change: [-0.8796% +0.0394% +0.9526%] (p = 0.94 > 0.05)
Found 13 outliers among 100 measurements (13.00%)
5 (5.00%) high mild
8 (8.00%) high severe
RxStreamOrderer::inbound_frame(): No change in performance detected.
time: [115.65 ms 115.69 ms 115.74 ms]
change: [-0.0920% -0.0336% +0.0224%] (p = 0.25 > 0.05)
Found 16 outliers among 100 measurements (16.00%)
2 (2.00%) low severe
4 (4.00%) low mild
5 (5.00%) high mild
5 (5.00%) high severe
SentPackets::take_ranges: No change in performance detected.
time: [5.1866 µs 5.2865 µs 5.3865 µs]
change: [-2.2685% +0.2748% +2.9005%] (p = 0.84 > 0.05)
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
transfer/pacing-false/varying-seeds: :green_heart: Performance has improved.
time: [34.239 ms 34.303 ms 34.367 ms]
change: [-4.3117% -4.0481% -3.7785%] (p = 0.00 < 0.05)
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
transfer/pacing-true/varying-seeds: :green_heart: Performance has improved.
time: [34.512 ms 34.562 ms 34.611 ms]
change: [-3.7446% -3.5289% -3.3289%] (p = 0.00 < 0.05)
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
transfer/pacing-false/same-seed: Change within noise threshold.
time: [34.474 ms 34.519 ms 34.564 ms]
change: [-3.3470% -3.1536% -2.9550%] (p = 0.00 < 0.05)
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) low mild
1 (1.00%) high mild
transfer/pacing-true/same-seed: Change within noise threshold.
time: [34.936 ms 34.991 ms 35.047 ms]
change: [-3.1292% -2.9094% -2.7053%] (p = 0.00 < 0.05)
1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client: Change within noise threshold.
time: [2.2149 s 2.2222 s 2.2295 s]
thrpt: [44.854 MiB/s 45.001 MiB/s 45.148 MiB/s]
change:
time: [+0.2981% +0.7971% +1.2996%] (p = 0.00 < 0.05)
thrpt: [… -0.7908% -0.2972%]
1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client: No change in performance detected.
time: [387.40 ms 389.50 ms 391.63 ms]
thrpt: [25.534 Kelem/s 25.674 Kelem/s 25.813 Kelem/s]
change:
time: [-0.9899% -0.2387% +0.5120%] (p = 0.55 > 0.05)
thrpt: [-0.5094% +0.2392% +0.9997%]
Found 5 outliers among 100 measurements (5.00%)
1 (1.00%) low mild
4 (4.00%) high mild
1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client: :green_heart: Performance has improved.
time: [27.640 ms 28.317 ms 29.000 ms]
thrpt: [34.483 elem/s 35.315 elem/s 36.180 elem/s]
change:
time: [-9.6417% -6.3309% -2.8998%] (p = 0.00 < 0.05)
thrpt: [… +6.7588% +10.671%]
1-conn/1-100mb-resp/mtu-1504 (aka. Upload)/client: :green_heart: Performance has improved.
time: [3.1659 s 3.1898 s 3.2168 s]
thrpt: [31.086 MiB/s 31.350 MiB/s 31.586 MiB/s]
change:
time: [-10.442% -9.5351% -8.5913%] (p = 0.00 < 0.05)
thrpt: [… +10.540% +11.660%]
Found 3 outliers among 100 measurements (3.00%)
3 (3.00%) high severe
Client/server transfer results
Performance differences relative to 8b4a9c961ad6f2a0e78a3ff08356d055a9c0b39e.
Transfer of 33554432 bytes over loopback, 30 runs. All unit-less numbers are in milliseconds.
| Client | Server | CC | Pacing | Mean ± σ | Min | Max | Δ main | Δ main |
|---|---|---|---|---|---|---|---|---|
| neqo | neqo | reno | on | 499.8 ± 42.8 | 443.3 | 651.4 | 13.1 | 0.7% |
| neqo | neqo | reno | | 501.0 ± 46.5 | 449.6 | 673.6 | -10.1 | -0.5% |
| neqo | neqo | cubic | on | 518.7 ± 40.6 | 473.1 | 670.5 | 6.7 | 0.3% |
| neqo | neqo | cubic | | 507.9 ± 35.3 | 459.8 | 598.8 | -11.4 | -0.6% |
| neqo | | reno | on | 901.3 ± 95.1 | 647.4 | 997.3 | 4.7 | 0.1% |
| neqo | | reno | | 899.2 ± 90.9 | 665.4 | 996.3 | 3.3 | 0.1% |
| neqo | | cubic | on | 894.3 ± 103.3 | 636.9 | 1113.0 | 1.6 | 0.0% |
| neqo | | cubic | | 894.2 ± 89.8 | 665.7 | 1013.5 | 4.2 | 0.1% |
| | | | | 541.2 ± 43.4 | 519.2 | 760.5 | 2.7 | 0.1% |
| neqo | msquic | reno | on | 221.6 ± 35.5 | 196.0 | 391.0 | -0.3 | -0.0% |
| neqo | msquic | reno | | 217.1 ± 11.7 | 199.8 | 247.4 | -10.8 | -1.2% |
| neqo | msquic | cubic | on | 212.5 ± 12.4 | 192.8 | 244.0 | -20.1 | -2.3% |
| neqo | msquic | cubic | | 225.2 ± 33.3 | 198.2 | 385.2 | 1.5 | 0.2% |
| msquic | msquic | | | 119.1 ± 23.8 | 102.0 | 235.7 | -8.3 | -1.7% |
Firefox builds for this PR
The following builds are available for testing. Crossed-out builds did not succeed.
@martinthomson I'd appreciate a review, since the code I am touching is pretty complex.
This patch is in conflict with RFC 9002, right?
> A PTO timer expiration event does not indicate packet loss and MUST NOT cause prior unacknowledged packets to be marked as lost.
https://www.rfc-editor.org/rfc/rfc9002.html#section-6.2
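For reference, the loss-detection rule the quote is contrasting with; a rough sketch of RFC 9002's packet-number threshold test, with simplified types (the RFC additionally defines a time threshold):

```rust
/// kPacketThreshold from RFC 9002, Section 6.1.1.
const PACKET_THRESHOLD: u64 = 3;

/// A packet can only be declared lost relative to `largest_acked`;
/// before any ACK arrives, loss detection cannot mark anything lost,
/// and per the quote above a PTO must not do so either.
fn is_lost(pn: u64, largest_acked: Option<u64>) -> bool {
    match largest_acked {
        None => false, // no ACK yet: nothing can be declared lost
        Some(la) => la >= pn + PACKET_THRESHOLD,
    }
}
```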
IIRC (I can't verify right now), the issue I was trying to fix is that when a client tries to start a connection to an unresponsive server, we exponentially back off the retransmission timer, but never halve the cwnd.
The question is whether we want to halve the cwnd. Proper CC would want us to (those losses should be taken as signs of congestion), but it will of course lower performance, especially if there are PMTUD or ECN issues.
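For context, the exponential PTO backoff being discussed; a sketch of the RFC 9002 Section 6.2 computation (parameter names are illustrative), which by itself never touches the congestion window:

```rust
use std::time::Duration;

/// PTO = smoothed_rtt + max(4 * rttvar, kGranularity) + max_ack_delay,
/// doubled for each consecutive PTO (RFC 9002, Section 6.2).
fn pto(
    smoothed_rtt: Duration,
    rttvar: Duration,
    max_ack_delay: Duration,
    pto_count: u32,
) -> Duration {
    let granularity = Duration::from_millis(1); // kGranularity
    let base = smoothed_rtt + (rttvar * 4).max(granularity) + max_ack_delay;
    // Exponential backoff: the timer doubles, but the cwnd is not
    // reduced here; that is the separate reaction debated above.
    base * 2u32.pow(pto_count)
}
```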
The reason we are marking the CI as lost is that we'd otherwise never retransmit (IIRC).
> The reason we are marking the CI as lost is that we'd otherwise never retransmit (IIRC).
https://github.com/mozilla/neqo/pull/2129 should fix this, right?
Yes. Sorry, this is all still hanging together in my mind due to the big unified PR.
@mxinden is there still anything left here to do now that #2492 is in?
@larseggert I believe this can be closed.
Note that I did not land the change from this pull request, i.e. neqo-transport still performs no congestion control reaction before `largest_acked`. Similar to Martin's concerns above (https://github.com/mozilla/neqo/pull/2117#discussion_r1784493779), I don't see the need for a congestion control reaction. If I read RFC 9002 correctly, our current approach is in line with it.