Make root vars more stable
Never resolve a ty/ct vid to a higher vid as its root. This should make the optimization in rust-lang/rust#141500 more "stable" when there are a lot of vars flying around.
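To illustrate the invariant this PR establishes, here is a minimal, hypothetical union-find sketch (a toy, not rustc's actual `ena`-based unification table) in which `union` always keeps the lower vid as the root, so a variable is never resolved to a higher vid:

```rust
// Toy union-find over inference-variable indices. Hypothetical sketch;
// the real rustc implementation lives in the `ena` crate and is more
// involved (snapshots, ranks, unification values).
struct UnionFind {
    parent: Vec<usize>,
}

impl UnionFind {
    fn new(n: usize) -> Self {
        UnionFind { parent: (0..n).collect() }
    }

    /// Find the root of `vid`, compressing paths along the way.
    fn find(&mut self, vid: usize) -> usize {
        if self.parent[vid] != vid {
            let root = self.find(self.parent[vid]);
            self.parent[vid] = root;
        }
        self.parent[vid]
    }

    /// Equate two vids. The lower root always wins, so no vid is ever
    /// resolved to a *higher* vid as its root -- the invariant at issue.
    fn union(&mut self, a: usize, b: usize) {
        let ra = self.find(a);
        let rb = self.find(b);
        let (lo, hi) = if ra < rb { (ra, rb) } else { (rb, ra) };
        self.parent[hi] = lo;
    }
}

fn main() {
    let mut uf = UnionFind::new(5);
    uf.union(3, 1); // root of {1, 3} becomes 1, never 3
    uf.union(4, 3); // root of {1, 3, 4} stays 1
    assert_eq!(uf.find(3), 1);
    assert_eq!(uf.find(4), 1);
}
```

The trade-off versus rank-based unioning is that always preferring the lower vid can produce deeper trees, but it makes the choice of root deterministic regardless of union order, which is what keeps downstream optimizations "stable" when many vars are in flight.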
r? @ghost
@bors2 try @rust-timer queue
Awaiting bors try build completion.
@rustbot label: +S-waiting-on-perf
:hourglass: Trying commit e1567dff243135a84f8e348528da782bee1d13e9 with merge 55fb0af4ed7d14a8bca0f3c87248c6c66fcde13b…
To cancel the try build, run the command @bors2 try cancel.
:sunny: Try build successful (CI)
Build commit: 55fb0af4ed7d14a8bca0f3c87248c6c66fcde13b (55fb0af4ed7d14a8bca0f3c87248c6c66fcde13b)
Queued 55fb0af4ed7d14a8bca0f3c87248c6c66fcde13b with parent 0b20963d6b892651937fb3600e15ca285bdcfefd, future comparison URL. There are currently 2 preceding artifacts in the queue. It will probably take at least ~3.1 hours until the benchmark run finishes.
Finished benchmarking commit (55fb0af4ed7d14a8bca0f3c87248c6c66fcde13b): comparison URL.
Overall result: ❌✅ regressions and improvements - please read the text below
Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.
Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.
@bors rollup=never @rustbot label: -S-waiting-on-perf +perf-regression
Instruction count
This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 0.6% | [0.6%, 0.6%] | 2 |
| Improvements ✅ (primary) | -0.1% | [-0.1%, -0.1%] | 1 |
| Improvements ✅ (secondary) | -0.6% | [-1.3%, -0.1%] | 8 |
| All ❌✅ (primary) | -0.1% | [-0.1%, -0.1%] | 1 |
Max RSS (memory usage)
Results (primary -1.1%, secondary 1.8%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 1.5% | [1.5%, 1.5%] | 1 |
| Regressions ❌ (secondary) | 1.8% | [1.1%, 2.5%] | 2 |
| Improvements ✅ (primary) | -2.4% | [-4.1%, -0.7%] | 2 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | -1.1% | [-4.1%, 1.5%] | 3 |
Cycles
This benchmark run did not return any relevant results for this metric.
Binary size
This benchmark run did not return any relevant results for this metric.
Bootstrap: 750.971s -> 751.025s (0.01%) Artifact size: 371.78 MiB -> 371.71 MiB (-0.02%)
Let's try this again
@bors2 try @rust-timer queue
Awaiting bors try build completion.
@rustbot label: +S-waiting-on-perf
:hourglass: Trying commit e1567dff243135a84f8e348528da782bee1d13e9 with merge 8c7ffa9414d872a8693f2144760144310f152c7f…
To cancel the try build, run the command @bors2 try cancel.
:sunny: Try build successful (CI)
Build commit: 8c7ffa9414d872a8693f2144760144310f152c7f (8c7ffa9414d872a8693f2144760144310f152c7f)
Queued 8c7ffa9414d872a8693f2144760144310f152c7f with parent c31cccb7b5cc098b1a8c1794ed38d7fdbec0ccb0, future comparison URL. There is currently 1 preceding artifact in the queue. It will probably take at least ~2.5 hours until the benchmark run finishes.
Finished benchmarking commit (8c7ffa9414d872a8693f2144760144310f152c7f): comparison URL.
Overall result: ✅ improvements - no action needed
Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.
@bors rollup=never @rustbot label: -S-waiting-on-perf -perf-regression
Instruction count
This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -0.9% | [-1.5%, -0.4%] | 14 |
| All ❌✅ (primary) | - | - | 0 |
Max RSS (memory usage)
Results (primary 3.1%, secondary -5.8%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 3.1% | [2.0%, 4.3%] | 2 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -5.8% | [-8.9%, -2.7%] | 2 |
| All ❌✅ (primary) | 3.1% | [2.0%, 4.3%] | 2 |
Cycles
This benchmark run did not return any relevant results for this metric.
Binary size
This benchmark run did not return any relevant results for this metric.
Bootstrap: 751.176s -> 750.384s (-0.11%) Artifact size: 372.27 MiB -> 372.21 MiB (-0.02%)
I don't see why we shouldn't land this, since I kinda like the invariant that whenever we equate two vars, the root is the lowest vid. But it's also basically useless today, so I could see us tabling this for the future too.
r? lcnr
it's still a minor performance improvement and I agree that this change is desirable regardless of perf.
It also feels better w.r.t. fudging and whatnot
@bors r+ rollup=never
:pushpin: Commit e1567dff243135a84f8e348528da782bee1d13e9 has been approved by lcnr
It is now in the queue for this repository.
:hourglass: Testing commit e1567dff243135a84f8e348528da782bee1d13e9 with merge 2b0274c71dba0e24370ebf65593da450e2e91868...
:sunny: Test successful - checks-actions Approved by: lcnr Pushing 2b0274c71dba0e24370ebf65593da450e2e91868 to master...
What is this?
This is an experimental post-merge analysis report that shows differences in test outcomes between the merged PR and its parent PR. Comparing 1c047506f94cd2d05228eb992b0a6bbed1942349 (parent) -> 2b0274c71dba0e24370ebf65593da450e2e91868 (this PR)
Test differences
No test diffs found
Test dashboard
Run
```shell
cargo run --manifest-path src/ci/citool/Cargo.toml -- \
    test-dashboard 2b0274c71dba0e24370ebf65593da450e2e91868 --output-dir test-dashboard
```
And then open `test-dashboard/index.html` in your browser to see an overview of all executed tests.
Job duration changes
- dist-apple-various: 7994.5s -> 6762.3s (-15.4%)
- mingw-check-tidy: 75.9s -> 66.6s (-12.2%)
- aarch64-gnu: 6304.3s -> 6799.1s (7.8%)
- mingw-check-1: 1835.7s -> 1968.4s (7.2%)
- x86_64-apple-1: 6861.4s -> 7286.4s (6.2%)
- armhf-gnu: 4779.0s -> 5021.3s (5.1%)
- dist-x86_64-musl: 7357.6s -> 7010.5s (-4.7%)
- dist-i686-mingw: 7754.7s -> 8114.6s (4.6%)
- i686-gnu-1: 7884.8s -> 8247.0s (4.6%)
- dist-loongarch64-musl: 4776.3s -> 4984.1s (4.4%)
How to interpret the job duration changes?
Job durations can vary a lot, based on the actual runner instance that executed the job, system noise, invalidated caches, etc. The table above is provided mostly for t-infra members, for simpler debugging of potential CI slow-downs.
Finished benchmarking commit (2b0274c71dba0e24370ebf65593da450e2e91868): comparison URL.
Overall result: ❌✅ regressions and improvements - please read the text below
Our benchmarks found a performance regression caused by this PR. This might be an actual regression, but it can also be just noise.
Next Steps:
- If the regression was expected or you think it can be justified, please write a comment with sufficient written justification, and add @rustbot label: +perf-regression-triaged to it, to mark the regression as triaged.
- If you think that you know of a way to resolve the regression, try to create a new PR with a fix for the regression.
- If you do not understand the regression or you think that it is just noise, you can ask the @rust-lang/wg-compiler-performance working group for help (members of this group were already notified of this PR).
@rustbot label: +perf-regression cc @rust-lang/wg-compiler-performance
Instruction count
Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.1% | [0.1%, 0.1%] | 1 |
| Regressions ❌ (secondary) | 0.3% | [0.3%, 0.3%] | 2 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -0.9% | [-1.2%, -0.4%] | 9 |
| All ❌✅ (primary) | 0.1% | [0.1%, 0.1%] | 1 |
Max RSS (memory usage)
Results (primary -1.5%, secondary 4.1%)
A less reliable metric. May be of interest, but not used to determine the overall result above.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 6.6% | [3.6%, 9.7%] | 2 |
| Improvements ✅ (primary) | -1.5% | [-1.5%, -1.5%] | 1 |
| Improvements ✅ (secondary) | -0.9% | [-0.9%, -0.9%] | 1 |
| All ❌✅ (primary) | -1.5% | [-1.5%, -1.5%] | 1 |
Cycles
Results (secondary -2.7%)
A less reliable metric. May be of interest, but not used to determine the overall result above.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -2.7% | [-2.7%, -2.7%] | 1 |
| All ❌✅ (primary) | - | - | 0 |
Binary size
This benchmark run did not return any relevant results for this metric.
Bootstrap: 756.123s -> 754.155s (-0.26%) Artifact size: 372.14 MiB -> 372.17 MiB (0.01%)
The single regression on a primary benchmark is a doc build and it's super tiny; the rest are tiny improvements.
@rustbot label: +perf-regression-triaged