fix: Don't condition `neqo_common::log::init` on `cfg(test)`
Turns out we cannot make the initialization conditional on `cfg(test)`, because before a test uses one of the logging macros, the log init won't happen and there will be no output :-(
Fixes #2368
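For context, the pattern at issue looks roughly like this: the logging macros funnel through an idempotent `init`, and the fix keeps that call unconditional rather than gating it on `cfg(test)`. Below is a minimal, hypothetical sketch (the names, macro shape, and `env_logger` backend are assumptions, not the actual neqo-common source):

```rust
// Hypothetical sketch of the neqo_common::log pattern; not the real source.
use std::sync::Once;

static INIT_ONCE: Once = Once::new();

/// Idempotent logger setup. Deliberately NOT behind #[cfg(test)]: if it
/// were, this function would be compiled out of the library that dependent
/// crates' tests link against, and their logging would go nowhere.
pub fn init() {
    INIT_ONCE.call_once(|| {
        // Backend assumed for illustration; any `log` implementation works.
        env_logger::builder().is_test(true).try_init().ok();
    });
}

/// Each logging macro calls init() on every use, so the first qdebug! in
/// any test or binary sets up the logger.
#[macro_export]
macro_rules! qdebug {
    ($($arg:tt)*) => {{
        $crate::log::init();
        ::log::debug!($($arg)*);
    }};
}
```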
Codecov Report
:white_check_mark: All modified and coverable lines are covered by tests.
:white_check_mark: Project coverage is 93.34%. Comparing base (3d2948d) to head (551d49d).
:warning: Report is 153 commits behind head on main.
Additional details and impacted files
```
@@            Coverage Diff             @@
##             main    #3024      +/-   ##
==========================================
- Coverage   93.37%   93.34%   -0.04%
==========================================
  Files         123      123
  Lines       35887    35887
  Branches    35887    35887
==========================================
- Hits        33511    33500      -11
- Misses       1533     1545      +12
+ Partials      843      842       -1
```
| Components | Coverage Δ | |
|---|---|---|
| neqo-common | 97.41% <ø> (ø) | |
| neqo-crypto | 83.30% <ø> (-0.44%) | :arrow_down: |
| neqo-http3 | 93.34% <ø> (ø) | |
| neqo-qpack | 94.18% <ø> (ø) | |
| neqo-transport | 94.40% <ø> (-0.02%) | :arrow_down: |
| neqo-udp | 79.32% <ø> (+0.48%) | :arrow_up: |
| mtu | 85.57% <ø> (-0.20%) | :arrow_down: |
Bencher Report
| Branch | fix-2368 |
| Testbed | On-prem |
🚨 1 Alert
| Benchmark | Measure (Units) | View | Benchmark Result (Result Δ%) | Upper Boundary (Limit %) |
|---|---|---|---|---|
| decode 1048576 bytes, mask ff | Latency, milliseconds (ms) | 📈 plot 🚷 threshold 🚨 alert (🔔) | 3.05 ms (+0.56%) Baseline: 3.03 ms | 3.04 ms (100.15%) |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result nanoseconds (ns) (Result Δ%) | Upper Boundary nanoseconds (ns) (Limit %) |
|---|---|---|---|
| 1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client | 📈 view plot 🚷 view threshold | 198,850,000.00 ns (-4.83%) Baseline: 208,952,485.88 ns | 218,094,024.51 ns (91.18%) |
| 1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client | 📈 view plot 🚷 view threshold | 194,430,000.00 ns (-4.22%) Baseline: 202,993,220.34 ns | 212,907,214.98 ns (91.32%) |
| 1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client | 📈 view plot 🚷 view threshold | 28,373,000.00 ns (-0.14%) Baseline: 28,412,401.13 ns | 28,867,395.64 ns (98.29%) |
| 1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client | 📈 view plot 🚷 view threshold | 285,470,000.00 ns (-3.14%) Baseline: 294,737,853.11 ns | 306,115,720.70 ns (93.26%) |
| 1-streams/each-1000-bytes/simulated-time | 📈 view plot 🚷 view threshold | 119,190,000.00 ns (+0.74%) Baseline: 118,316,610.17 ns | 120,894,626.36 ns (98.59%) |
| 1-streams/each-1000-bytes/wallclock-time | 📈 view plot 🚷 view threshold | 581,710.00 ns (-2.77%) Baseline: 598,297.97 ns | 623,196.03 ns (93.34%) |
| 1000-streams/each-1-bytes/simulated-time | 📈 view plot 🚷 view threshold | 14,984,000,000.00 ns (-0.05%) Baseline: 14,991,779,661.02 ns | 15,010,287,868.13 ns (99.82%) |
| 1000-streams/each-1-bytes/wallclock-time | 📈 view plot 🚷 view threshold | 13,455,000.00 ns (-5.34%) Baseline: 14,214,672.32 ns | 14,995,822.76 ns (89.72%) |
| 1000-streams/each-1000-bytes/simulated-time | 📈 view plot 🚷 view threshold | 19,081,000,000.00 ns (+0.90%) Baseline: 18,911,203,389.83 ns | 19,161,653,024.67 ns (99.58%) |
| 1000-streams/each-1000-bytes/wallclock-time | 📈 view plot 🚷 view threshold | 46,722,000.00 ns (-10.64%) Baseline: 52,286,022.60 ns | 58,779,071.96 ns (79.49%) |
| RxStreamOrderer::inbound_frame() | 📈 view plot 🚷 view threshold | 109,250,000.00 ns (-0.54%) Baseline: 109,846,214.69 ns | 111,991,379.69 ns (97.55%) |
| coalesce_acked_from_zero 1+1 entries | 📈 view plot 🚷 view threshold | 88.40 ns (-0.26%) Baseline: 88.63 ns | 89.31 ns (98.99%) |
| coalesce_acked_from_zero 10+1 entries | 📈 view plot 🚷 view threshold | 105.85 ns (-0.23%) Baseline: 106.10 ns | 107.09 ns (98.84%) |
| coalesce_acked_from_zero 1000+1 entries | 📈 view plot 🚷 view threshold | 89.07 ns (-0.86%) Baseline: 89.84 ns | 94.47 ns (94.28%) |
| coalesce_acked_from_zero 3+1 entries | 📈 view plot 🚷 view threshold | 106.31 ns (-0.28%) Baseline: 106.61 ns | 107.58 ns (98.82%) |
| decode 1048576 bytes, mask 3f | 📈 view plot 🚷 view threshold | 1,589,600.00 ns (-0.18%) Baseline: 1,592,525.99 ns | 1,599,567.65 ns (99.38%) |
| decode 1048576 bytes, mask 7f | 📈 view plot 🚷 view threshold | 5,067,400.00 ns (+0.20%) Baseline: 5,057,367.23 ns | 5,077,228.06 ns (99.81%) |
| decode 1048576 bytes, mask ff | 📈 view plot 🚷 view threshold 🚨 view alert (🔔) | 3,048,600.00 ns (+0.56%) Baseline: 3,031,712.43 ns | 3,043,907.17 ns (100.15%) |
| decode 4096 bytes, mask 3f | 📈 view plot 🚷 view threshold | 8,322.60 ns (+0.31%) Baseline: 8,296.76 ns | 8,344.71 ns (99.74%) |
| decode 4096 bytes, mask 7f | 📈 view plot 🚷 view threshold | 20,020.00 ns (+0.06%) Baseline: 20,007.73 ns | 20,086.29 ns (99.67%) |
| decode 4096 bytes, mask ff | 📈 view plot 🚷 view threshold | 11,636.00 ns (-0.82%) Baseline: 11,731.87 ns | 11,977.12 ns (97.15%) |
| sent::Packets::take_ranges | 📈 view plot 🚷 view threshold | 4,694.10 ns (-1.11%) Baseline: 4,746.66 ns | 4,988.79 ns (94.09%) |
| transfer/pacing-false/same-seed/simulated-time/run | 📈 view plot 🚷 view threshold | 25,710,000,000.00 ns (+1.83%) Baseline: 25,247,657,142.86 ns | 25,741,436,848.78 ns (99.88%) |
| transfer/pacing-false/same-seed/wallclock-time/run | 📈 view plot 🚷 view threshold | 24,897,000.00 ns (-4.32%) Baseline: 26,020,537.14 ns | 27,093,384.12 ns (91.89%) |
| transfer/pacing-false/varying-seeds/simulated-time/run | 📈 view plot 🚷 view threshold | 25,151,000,000.00 ns (-0.06%) Baseline: 25,166,611,428.57 ns | 25,211,284,446.62 ns (99.76%) |
| transfer/pacing-false/varying-seeds/wallclock-time/run | 📈 view plot 🚷 view threshold | 24,820,000.00 ns (-5.49%) Baseline: 26,260,405.71 ns | 27,622,976.93 ns (89.85%) |
| transfer/pacing-true/same-seed/simulated-time/run | 📈 view plot 🚷 view threshold | 25,675,000,000.00 ns (+0.28%) Baseline: 25,602,914,285.71 ns | 25,679,901,444.16 ns (99.98%) |
| transfer/pacing-true/same-seed/wallclock-time/run | 📈 view plot 🚷 view threshold | 25,810,000.00 ns (-5.97%) Baseline: 27,449,417.14 ns | 28,793,808.73 ns (89.64%) |
| transfer/pacing-true/varying-seeds/simulated-time/run | 📈 view plot 🚷 view threshold | 24,972,000,000.00 ns (-0.09%) Baseline: 24,993,925,714.29 ns | 25,043,552,461.10 ns (99.71%) |
| transfer/pacing-true/varying-seeds/wallclock-time/run | 📈 view plot 🚷 view threshold | 25,227,000.00 ns (-5.84%) Baseline: 26,791,857.14 ns | 28,187,846.96 ns (89.50%) |
Bencher Report
| Branch | fix-2368 |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| google vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 274.52 ms (-1.11%) Baseline: 277.61 ms | 280.33 ms (97.93%) |
| msquic vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 190.43 ms (-2.24%) Baseline: 194.80 ms | 229.38 ms (83.02%) |
| neqo vs. google (cubic, paced) | 📈 view plot 🚷 view threshold | 753.66 ms (-0.53%) Baseline: 757.66 ms | 764.82 ms (98.54%) |
| neqo vs. msquic (cubic, paced) | 📈 view plot 🚷 view threshold | 156.37 ms (-0.37%) Baseline: 156.95 ms | 158.88 ms (98.42%) |
| neqo vs. neqo (cubic) | 📈 view plot 🚷 view threshold | 91.03 ms (+0.05%) Baseline: 90.98 ms | 94.71 ms (96.12%) |
| neqo vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 92.32 ms (+0.06%) Baseline: 92.27 ms | 95.73 ms (96.44%) |
| neqo vs. neqo (reno) | 📈 view plot 🚷 view threshold | 88.88 ms (-2.23%) Baseline: 90.91 ms | 94.20 ms (94.35%) |
| neqo vs. neqo (reno, paced) | 📈 view plot 🚷 view threshold | 94.30 ms (+2.23%) Baseline: 92.24 ms | 95.54 ms (98.70%) |
| neqo vs. quiche (cubic, paced) | 📈 view plot 🚷 view threshold | 195.17 ms (+0.64%) Baseline: 193.92 ms | 197.38 ms (98.88%) |
| neqo vs. s2n (cubic, paced) | 📈 view plot 🚷 view threshold | 221.58 ms (+0.31%) Baseline: 220.89 ms | 223.60 ms (99.09%) |
| quiche vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 152.11 ms (-0.00%) Baseline: 152.11 ms | 157.80 ms (96.39%) |
| s2n vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 174.97 ms (+0.72%) Baseline: 173.72 ms | 177.83 ms (98.39%) |
Client/server transfer results
Performance differences relative to 3a429ef8b795dc54dfa59c4edb435943bcf59439.
Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.
| Client vs. server (params) | Mean ± σ | Min | Max | MiB/s ± σ | Δ main | Δ main % |
|---|---|---|---|---|---|---|
| google vs. google | 451.8 ± 3.5 | 445.0 | 464.0 | 70.8 ± 9.1 | | |
| google vs. neqo (cubic, paced) | 274.5 ± 3.8 | 269.4 | 282.5 | 116.6 ± 8.4 | :green_heart: -1.6 | -0.6% |
| msquic vs. msquic | 165.3 ± 39.4 | 131.6 | 405.1 | 193.6 ± 0.8 | | |
| msquic vs. neqo (cubic, paced) | 190.4 ± 35.6 | 152.5 | 413.0 | 168.0 ± 0.9 | 4.3 | 2.3% |
| neqo vs. google (cubic, paced) | 753.7 ± 7.4 | 745.8 | 815.2 | 42.5 ± 4.3 | 0.3 | 0.0% |
| neqo vs. msquic (cubic, paced) | 156.4 ± 4.6 | 150.6 | 164.9 | 204.6 ± 7.0 | -0.6 | -0.4% |
| neqo vs. neqo (cubic) | 91.0 ± 4.3 | 84.5 | 98.3 | 351.5 ± 7.4 | :broken_heart: 1.7 | 1.9% |
| neqo vs. neqo (cubic, paced) | 92.3 ± 4.3 | 85.7 | 103.3 | 346.6 ± 7.4 | :broken_heart: 2.8 | 3.1% |
| neqo vs. neqo (reno) | 88.9 ± 3.8 | 83.3 | 100.7 | 360.0 ± 8.4 | 0.4 | 0.4% |
| neqo vs. neqo (reno, paced) | 94.3 ± 4.5 | 84.7 | 101.3 | 339.3 ± 7.1 | :broken_heart: 3.6 | 4.0% |
| neqo vs. quiche (cubic, paced) | 195.2 ± 4.9 | 186.5 | 214.2 | 164.0 ± 6.5 | :green_heart: -2.0 | -1.0% |
| neqo vs. s2n (cubic, paced) | 221.6 ± 4.0 | 213.6 | 229.3 | 144.4 ± 8.0 | 1.0 | 0.5% |
| quiche vs. neqo (cubic, paced) | 152.1 ± 5.1 | 142.4 | 169.7 | 210.4 ± 6.3 | :green_heart: -3.1 | -2.0% |
| quiche vs. quiche | 142.1 ± 5.1 | 134.8 | 157.6 | 225.1 ± 6.3 | | |
| s2n vs. neqo (cubic, paced) | 175.0 ± 4.3 | 165.4 | 183.7 | 182.9 ± 7.4 | :broken_heart: 2.9 | 1.7% |
| s2n vs. s2n | 249.6 ± 25.8 | 233.0 | 348.1 | 128.2 ± 1.2 | | |
Download data for profiler.firefox.com or download performance comparison data.
> Turns out we cannot make the initialization conditional on `cfg(test)`, because before a test uses one of the logging macros, the log init won't happen and there will be no output :-(
Despite what I said in #2368, I don't fully understand the above.
Are you saying that e.g. a `qdebug` in non-test code won't trigger unless there is a `qdebug` in test code as well? Why is that? `cfg(test)` is a compile-time option, so it should apply to the whole binary, i.e. both non-test and test code.
Without `cfg(test)`, we would also initialize in Firefox, correct? Is that safe?
The issue is that the log init call in the `qdebug` macro does not execute in test code.
Why is that? `cfg(test)` should be true in that case, no?
I'm not quite sure; maybe it's due to how macro expansion and feature checks intersect?
I think we need to get to the root of this before merging. Especially since this might interfere with Firefox's logging setup.
@claude says
The issue is that `#[cfg(test)]` doesn't cross crate boundaries in macros.
The solution: use `#[cfg(debug_assertions)]` instead of `#[cfg(any(test, feature = "bench"))]` (see the sketch after the list).
This works because:
1. ✅ `debug_assertions` is evaluated at the macro expansion site (where it's used), not where it's defined
2. ✅ It's enabled by default for test builds (`cargo test`)
3. ✅ It's enabled for debug builds (`cargo build`)
4. ✅ It's disabled for release builds (`cargo build --release`), avoiding production overhead
5. ✅ For benchmarks, you can use `cargo bench --profile=dev` or `--features bench` to enable it when needed
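A minimal sketch of the proposed gate (hypothetical, not merged code; the macro shape is assumed):

```rust
// Sketch of gating the init call on debug_assertions instead of cfg(test).
// `macro_rules!` bodies are expanded at the call site, so the cfg below is
// evaluated with the *caller's* configuration. `debug_assertions` is set
// for every crate in a debug or test build, whereas `cfg(test)` is only set
// while compiling the test target itself -- neqo-common built as an
// ordinary dependency never sees it, which is the cross-crate problem
// described above.
#[macro_export]
macro_rules! qdebug {
    ($($arg:tt)*) => {{
        #[cfg(debug_assertions)]
        $crate::log::init();
        ::log::debug!($($arg)*);
    }};
}
```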
Failed Interop Tests
QUIC Interop Runner, client vs. server
neqo-latest as client
- neqo-latest vs. go-x-net: :warning:BP BA
- neqo-latest vs. haproxy: :warning:L1 C1 BP BA
- neqo-latest vs. kwik: :warning:BP BA
- neqo-latest vs. linuxquic: :warning:L1 C1
- neqo-latest vs. lsquic: :warning:E L1 C1
- neqo-latest vs. msquic: :warning:R Z A L1 C1
- neqo-latest vs. mvfst: :warning:A L1 C1
- neqo-latest vs. neqo: :warning:A
- neqo-latest vs. neqo-latest: :warning:A
- neqo-latest vs. nginx: :warning:BP BA
- neqo-latest vs. ngtcp2: :warning:E CM
- neqo-latest vs. picoquic: :warning:Z E A
- neqo-latest vs. quic-go: :warning:A
- neqo-latest vs. quiche: :warning:BP BA
- neqo-latest vs. quinn: :warning:A
- neqo-latest vs. s2n-quic: :warning:E BA CM
- neqo-latest vs. tquic: :warning:S A BP BA
- neqo-latest vs. xquic: :warning:H DC LR C20 M R Z 3 B U A L1 L2 C1 C2 6 BP BA
neqo-latest as server
- aioquic vs. neqo-latest: :warning:CM
- go-x-net vs. neqo-latest: :warning:CM
- kwik vs. neqo-latest: :warning:BP BA CM
- lsquic vs. neqo-latest: :warning:BA
- msquic vs. neqo-latest: :warning:U CM
- mvfst vs. neqo-latest: :warning:Z A L1 C1 CM
- neqo vs. neqo-latest: :warning:A
- openssl vs. neqo-latest: :warning:LR M A CM
- quic-go vs. neqo-latest: run cancelled after 20 min
- quiche vs. neqo-latest: :warning:C1 CM
- quinn vs. neqo-latest: :warning:V2 CM
- s2n-quic vs. neqo-latest: :warning:CM
- tquic vs. neqo-latest: :warning:CM
- xquic vs. neqo-latest: :warning:M CM
All results
Succeeded Interop Tests
QUIC Interop Runner, client vs. server
neqo-latest as client
- neqo-latest vs. aioquic: :rocket:~~H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6 V2 BP BA~~
- neqo-latest vs. go-x-net: :rocket:~~H DC LR M B U A L2 C2 6~~
- neqo-latest vs. haproxy: :rocket:~~H DC LR C20 M S R Z 3 B U A L2 C2 6 V2~~
- neqo-latest vs. kwik: :rocket:~~H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6 V2~~
- neqo-latest vs. linuxquic: :rocket:~~H DC LR C20 M S R Z 3 B U E A L2 C2 6 V2 BP BA CM~~
- neqo-latest vs. lsquic: :rocket:~~H DC LR C20 M S R Z 3 B U A L2 C2 6 V2 BP BA CM~~
- neqo-latest vs. msquic: :rocket:~~H DC LR C20 M S B U L2 C2 6 V2 BP BA~~
- neqo-latest vs. mvfst: :rocket:~~H DC LR M R Z 3 B U L2 C2 6 BP BA~~
- neqo-latest vs. neqo: :rocket:~~H DC LR C20 M S R Z 3 B U E L1 L2 C1 C2 6 V2 BP BA CM~~
- neqo-latest vs. neqo-latest: :rocket:~~H DC LR C20 M S R Z 3 B U E L1 L2 C1 C2 6 V2 BP BA CM~~
- neqo-latest vs. nginx: :rocket:~~H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6~~
- neqo-latest vs. ngtcp2: :rocket:~~H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6 V2 BP BA~~
- neqo-latest vs. picoquic: :rocket:~~H DC LR C20 M S R 3 B U L1 L2 C1 C2 6 V2 BP BA~~
- neqo-latest vs. quic-go: :rocket:~~H DC LR C20 M S R Z 3 B U L1 L2 C1 C2 6 BP BA~~
- neqo-latest vs. quiche: :rocket:~~H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6~~
- neqo-latest vs. quinn: :rocket:~~H DC LR C20 M S R Z 3 B U E L1 L2 C1 C2 6 BP BA~~
- neqo-latest vs. s2n-quic: :rocket:~~H DC LR C20 M S R 3 B U A L1 L2 C1 C2 6 BP~~
- neqo-latest vs. tquic: :rocket:~~H DC LR C20 M R Z 3 B U L1 L2 C1 C2 6~~
neqo-latest as server
- aioquic vs. neqo-latest: :rocket:~~H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6 V2 BP BA~~
- chrome vs. neqo-latest: :rocket:~~3~~
- go-x-net vs. neqo-latest: :rocket:~~H DC LR M B U A L2 C2 6 BP BA~~
- kwik vs. neqo-latest: :rocket:~~H DC LR C20 M S R Z 3 B U A L1 L2 C1 C2 6 V2~~
- linuxquic vs. neqo-latest: :rocket:~~H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 V2 BP BA CM~~
- lsquic vs. neqo-latest: :rocket:~~H DC LR C20 M S R 3 B E A L1 L2 C1 C2 6 V2 BP CM~~
- msquic vs. neqo-latest: :rocket:~~H DC LR C20 M S R Z B A L1 L2 C1 C2 6 V2 BP BA~~
- mvfst vs. neqo-latest: :rocket:~~H DC LR M 3 B L2 C2 6 BP BA~~
- neqo vs. neqo-latest: :rocket:~~H DC LR C20 M S R Z 3 B U E L1 L2 C1 C2 6 V2 BP BA CM~~
- ngtcp2 vs. neqo-latest: :rocket:~~H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 V2 BP BA CM~~
- openssl vs. neqo-latest: :rocket:~~H DC C20 S R 3 B L2 C2 6 BP BA~~
- picoquic vs. neqo-latest: :rocket:~~H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 V2 BP BA CM~~
- quiche vs. neqo-latest: :rocket:~~H DC LR M S R Z 3 B A L1 L2 C2 6 BP BA~~
- quinn vs. neqo-latest: :rocket:~~H DC LR C20 M S R Z 3 B U E A L1 L2 C1 C2 6 BP BA~~
- s2n-quic vs. neqo-latest: :rocket:~~H DC LR M S R 3 B E A L1 L2 C1 C2 6 BP BA~~
- tquic vs. neqo-latest: :rocket:~~H DC LR M S R Z 3 B A L1 L2 C1 C2 6 BP BA~~
- xquic vs. neqo-latest: :rocket:~~H DC LR C20 S R Z 3 B U A L1 L2 C1 C2 6 BP BA~~
Unsupported Interop Tests
QUIC Interop Runner, client vs. server
neqo-latest as client
- neqo-latest vs. aioquic: E CM
- neqo-latest vs. go-x-net: C20 S R Z 3 E L1 C1 V2 CM
- neqo-latest vs. haproxy: E CM
- neqo-latest vs. kwik: E CM
- neqo-latest vs. msquic: 3 E CM
- neqo-latest vs. mvfst: C20 S E V2 CM
- neqo-latest vs. nginx: E V2 CM
- neqo-latest vs. picoquic: CM
- neqo-latest vs. quic-go: E V2 CM
- neqo-latest vs. quiche: E V2 CM
- neqo-latest vs. quinn: V2 CM
- neqo-latest vs. s2n-quic: Z V2
- neqo-latest vs. tquic: E V2 CM
- neqo-latest vs. xquic: S E V2 CM
neqo-latest as server
- aioquic vs. neqo-latest: E
- chrome vs. neqo-latest: H DC LR C20 M S R Z B U E A L1 L2 C1 C2 6 V2 BP BA CM
- go-x-net vs. neqo-latest: C20 S R Z 3 E L1 C1 V2
- kwik vs. neqo-latest: E
- lsquic vs. neqo-latest: Z U
- msquic vs. neqo-latest: 3 E
- mvfst vs. neqo-latest: C20 S R U E V2
- openssl vs. neqo-latest: Z U E L1 C1 V2
- quiche vs. neqo-latest: C20 U E V2
- s2n-quic vs. neqo-latest: C20 Z U V2
- tquic vs. neqo-latest: C20 U E V2
- xquic vs. neqo-latest: E V2
Benchmark results
Performance differences relative to 791fd40fb7e9ee4599c07c11695d1849110e704b.
1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client: Change within noise threshold.
time: [194.11 ms 194.43 ms 194.79 ms]
thrpt: [513.37 MiB/s 514.31 MiB/s 515.18 MiB/s]
change:
time: [−0.8271% −0.5944% −0.3714%] (p = 0.00 < 0.05)
thrpt: [… +0.5980% +0.8340%]
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe
1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client: No change in performance detected.
time: [283.82 ms 285.47 ms 287.12 ms]
thrpt: [34.828 Kelem/s 35.030 Kelem/s 35.234 Kelem/s]
change:
time: [−0.2397% +0.6239% +1.5048%] (p = 0.16 > 0.05)
thrpt: [−1.4825% −0.6200% +0.2403%]
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client: No change in performance detected.
time: [28.255 ms 28.373 ms 28.515 ms]
thrpt: [35.069 B/s 35.244 B/s 35.392 B/s]
change:
time: [−0.5035% +0.0113% +0.6117%] (p = 0.97 > 0.05)
thrpt: [−0.6080% −0.0113% +0.5061%]
Found 10 outliers among 100 measurements (10.00%)
3 (3.00%) low severe
1 (1.00%) low mild
1 (1.00%) high mild
5 (5.00%) high severe
1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client: :green_heart: Performance has improved.
time: [198.56 ms 198.85 ms 199.20 ms]
thrpt: [502.02 MiB/s 502.89 MiB/s 503.62 MiB/s]
change:
time: [−2.9775% −2.7104% −2.4454%] (p = 0.00 < 0.05)
thrpt: [… +2.7859% +3.0689%]
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe
decode 4096 bytes, mask ff: No change in performance detected.
time: [11.601 µs 11.636 µs 11.678 µs]
change: [−0.8050% −0.1766% +0.4169%] (p = 0.58 > 0.05)
Found 12 outliers among 100 measurements (12.00%)
1 (1.00%) low severe
3 (3.00%) low mild
8 (8.00%) high severe
decode 1048576 bytes, mask ff: No change in performance detected.
time: [3.0226 ms 3.0486 ms 3.0918 ms]
change: [−0.3897% +0.5957% +1.9529%] (p = 0.45 > 0.05)
Found 9 outliers among 100 measurements (9.00%)
9 (9.00%) high severe
decode 4096 bytes, mask 7f: No change in performance detected.
time: [19.960 µs 20.020 µs 20.089 µs]
change: [−0.4468% −0.0210% +0.3800%] (p = 0.92 > 0.05)
Found 17 outliers among 100 measurements (17.00%)
1 (1.00%) low severe
3 (3.00%) low mild
13 (13.00%) high severe
decode 1048576 bytes, mask 7f: No change in performance detected.
time: [5.0506 ms 5.0674 ms 5.0885 ms]
change: [−0.0518% +0.3531% +0.8368%] (p = 0.11 > 0.05)
Found 15 outliers among 100 measurements (15.00%)
15 (15.00%) high severe
decode 4096 bytes, mask 3f: No change in performance detected.
time: [8.2812 µs 8.3226 µs 8.3681 µs]
change: [−0.5375% −0.0162% +0.4945%] (p = 0.95 > 0.05)
Found 20 outliers among 100 measurements (20.00%)
7 (7.00%) low mild
3 (3.00%) high mild
10 (10.00%) high severe
decode 1048576 bytes, mask 3f: No change in performance detected.
time: [1.5854 ms 1.5896 ms 1.5952 ms]
change: [−0.6720% −0.1488% +0.3072%] (p = 0.60 > 0.05)
Found 5 outliers among 100 measurements (5.00%)
5 (5.00%) high severe
1-streams/each-1000-bytes/wallclock-time: Change within noise threshold.
time: [580.04 µs 581.71 µs 583.67 µs]
change: [−1.1229% −0.6642% −0.2075%] (p = 0.00 < 0.05)
Found 6 outliers among 100 measurements (6.00%)
1 (1.00%) high mild
5 (5.00%) high severe
1-streams/each-1000-bytes/simulated-time: No change in performance detected.
time: [119.00 ms 119.19 ms 119.38 ms]
thrpt: [8.1801 KiB/s 8.1932 KiB/s 8.2063 KiB/s]
change:
time: [−0.0379% +0.1988% +0.4416%] (p = 0.11 > 0.05)
thrpt: [−0.4397% −0.1984% +0.0380%]
1000-streams/each-1-bytes/wallclock-time: :green_heart: Performance has improved.
time: [13.432 ms 13.455 ms 13.478 ms]
change: [−2.0056% −1.7634% −1.5024%] (p = 0.00 < 0.05)
Found 2 outliers among 100 measurements (2.00%)
2 (2.00%) high mild
1000-streams/each-1-bytes/simulated-time: No change in performance detected.
time: [14.971 s 14.984 s 14.997 s]
thrpt: [66.681 B/s 66.737 B/s 66.794 B/s]
change:
time: [−0.1352% −0.0144% +0.1055%] (p = 0.82 > 0.05)
thrpt: [−0.1054% +0.0144% +0.1354%]
1000-streams/each-1000-bytes/wallclock-time: :green_heart: Performance has improved.
time: [46.540 ms 46.722 ms 46.905 ms]
change: [−5.6048% −5.0731% −4.5543%] (p = 0.00 < 0.05)
1000-streams/each-1000-bytes/simulated-time: No change in performance detected.
time: [18.894 s 19.081 s 19.272 s]
thrpt: [50.674 KiB/s 51.181 KiB/s 51.686 KiB/s]
change:
time: [−1.2592% +0.1315% +1.5433%] (p = 0.86 > 0.05)
thrpt: [−1.5199% −0.1313% +1.2753%]
Found 2 outliers among 100 measurements (2.00%)
2 (2.00%) high mild
coalesce_acked_from_zero 1+1 entries: No change in performance detected.
time: [88.069 ns 88.403 ns 88.726 ns]
change: [−1.4412% −0.5234% +0.2160%] (p = 0.25 > 0.05)
Found 12 outliers among 100 measurements (12.00%)
11 (11.00%) high mild
1 (1.00%) high severe
coalesce_acked_from_zero 3+1 entries: No change in performance detected.
time: [105.94 ns 106.31 ns 106.70 ns]
change: [−0.2554% +0.1622% +0.5749%] (p = 0.48 > 0.05)
Found 12 outliers among 100 measurements (12.00%)
12 (12.00%) high severe
coalesce_acked_from_zero 10+1 entries: No change in performance detected.
time: [105.41 ns 105.85 ns 106.38 ns]
change: [−0.4891% +0.1632% +0.8445%] (p = 0.65 > 0.05)
Found 8 outliers among 100 measurements (8.00%)
1 (1.00%) high mild
7 (7.00%) high severe
coalesce_acked_from_zero 1000+1 entries: No change in performance detected.
time: [88.960 ns 89.066 ns 89.187 ns]
change: [−1.1926% −0.2136% +0.7273%] (p = 0.68 > 0.05)
Found 11 outliers among 100 measurements (11.00%)
4 (4.00%) high mild
7 (7.00%) high severe
RxStreamOrderer::inbound_frame(): Change within noise threshold.
time: [109.09 ms 109.25 ms 109.50 ms]
change: [−0.5264% −0.3450% −0.0985%] (p = 0.00 < 0.05)
Found 20 outliers among 100 measurements (20.00%)
9 (9.00%) low mild
9 (9.00%) high mild
2 (2.00%) high severe
sent::Packets::take_ranges: No change in performance detected.
time: [4.5620 µs 4.6941 µs 4.8315 µs]
change: [−5.3235% −1.7882% +2.2839%] (p = 0.38 > 0.05)
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
transfer/pacing-false/varying-seeds/wallclock-time/run: Change within noise threshold.
time: [24.786 ms 24.820 ms 24.854 ms]
change: [−0.5891% −0.3749% −0.1598%] (p = 0.00 < 0.05)
transfer/pacing-false/varying-seeds/simulated-time/run: Change within noise threshold.
time: [25.121 s 25.151 s 25.181 s]
thrpt: [162.66 KiB/s 162.86 KiB/s 163.05 KiB/s]
change:
time: [−0.4022% −0.2135% −0.0149%] (p = 0.03 < 0.05)
thrpt: [… +0.2139% +0.4039%]
transfer/pacing-true/varying-seeds/wallclock-time/run: Change within noise threshold.
time: [25.168 ms 25.227 ms 25.287 ms]
change: [−1.2268% −0.8710% −0.5338%] (p = 0.00 < 0.05)
transfer/pacing-true/varying-seeds/simulated-time/run: Change within noise threshold.
time: [24.938 s 24.972 s 25.006 s]
thrpt: [163.80 KiB/s 164.02 KiB/s 164.25 KiB/s]
change:
time: [−0.4334% −0.2327% −0.0301%] (p = 0.03 < 0.05)
thrpt: [… +0.2333% +0.4352%]
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) low mild
transfer/pacing-false/same-seed/wallclock-time/run: Change within noise threshold.
time: [24.864 ms 24.897 ms 24.946 ms]
change: [−2.5814% −2.3501% −2.1333%] (p = 0.00 < 0.05)
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
transfer/pacing-false/same-seed/simulated-time/run: No change in performance detected.
time: [25.710 s 25.710 s 25.710 s]
thrpt: [159.31 KiB/s 159.31 KiB/s 159.31 KiB/s]
change:
time: [+0.0000% +0.0000% +0.0000%] (p = NaN > 0.05)
thrpt: [+0.0000% +0.0000% +0.0000%]
transfer/pacing-true/same-seed/wallclock-time/run: Change within noise threshold.
time: [25.783 ms 25.810 ms 25.843 ms]
change: [−2.6839% −2.5043% −2.3424%] (p = 0.00 < 0.05)
Found 4 outliers among 100 measurements (4.00%)
2 (2.00%) high mild
2 (2.00%) high severe
transfer/pacing-true/same-seed/simulated-time/run: No change in performance detected.
time: [25.675 s 25.675 s 25.675 s]
thrpt: [159.53 KiB/s 159.53 KiB/s 159.53 KiB/s]
change:
time: [+0.0000% +0.0000% +0.0000%] (p = NaN > 0.05)
thrpt: [+0.0000% +0.0000% +0.0000%]
Download data for profiler.firefox.com or download performance comparison data.
@claude's explanation makes sense. That leaves us with the question of whether `cfg(debug_assertions)` is compatible with Firefox's logging setup.
Let's not do this. If there are tests that don't generate debug output when we want them to, they just need to call `init` explicitly.
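A sketch of what that explicit call would look like in a test (hypothetical test; the real `neqo_common::log::init` may take arguments such as a level filter):

```rust
// Hypothetical test showing the explicit-init pattern suggested above.
#[test]
fn logs_during_test() {
    // Ensure the logger is set up even if no logging macro has run yet.
    neqo_common::log::init();
    // ... exercise code that logs via qdebug!/qinfo! ...
}
```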