
Make root vars more stable

Open compiler-errors opened this pull request 5 months ago • 15 comments

Never resolve a ty/ct vid to a higher vid as its root. This should make the optimization in rust-lang/rust#141500 more "stable" when there are a lot of vars flying around.
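As a rough illustration of the invariant (a toy sketch only; rustc's actual unification tables live in the `ena` crate and the names below are hypothetical), a union-find over variable IDs can guarantee that a vid never resolves to a higher vid by always making the numerically lower root the winner when two equivalence classes are unified:

```rust
// Toy union-find over u32 vids, illustrating the "lowest vid is the root"
// invariant. This is not rustc's implementation.
struct VarTable {
    parent: Vec<u32>,
}

impl VarTable {
    fn new() -> Self {
        VarTable { parent: Vec::new() }
    }

    fn new_var(&mut self) -> u32 {
        let vid = self.parent.len() as u32;
        self.parent.push(vid); // every var starts as its own root
        vid
    }

    // Find the root of `vid`, compressing the path as we go.
    fn root(&mut self, vid: u32) -> u32 {
        let parent = self.parent[vid as usize];
        if parent == vid {
            return vid;
        }
        let root = self.root(parent);
        self.parent[vid as usize] = root;
        root
    }

    // Equate two vars: the numerically lower root always becomes the new
    // root, so a vid never ends up resolving to a higher vid.
    fn equate(&mut self, a: u32, b: u32) {
        let (ra, rb) = (self.root(a), self.root(b));
        if ra != rb {
            let (lo, hi) = (ra.min(rb), ra.max(rb));
            self.parent[hi as usize] = lo;
        }
    }
}
```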

r? @ghost

compiler-errors avatar Jun 05 '25 17:06 compiler-errors

@bors2 try @rust-timer queue

compiler-errors avatar Jun 05 '25 17:06 compiler-errors

Awaiting bors try build completion.

@rustbot label: +S-waiting-on-perf

rust-timer avatar Jun 05 '25 17:06 rust-timer

:hourglass: Trying commit e1567dff243135a84f8e348528da782bee1d13e9 with merge 55fb0af4ed7d14a8bca0f3c87248c6c66fcde13b…

To cancel the try build, run the command @bors2 try cancel.

rust-bors[bot] avatar Jun 05 '25 17:06 rust-bors[bot]

:sunny: Try build successful (CI). Build commit: 55fb0af4ed7d14a8bca0f3c87248c6c66fcde13b

rust-bors[bot] avatar Jun 05 '25 19:06 rust-bors[bot]

Queued 55fb0af4ed7d14a8bca0f3c87248c6c66fcde13b with parent 0b20963d6b892651937fb3600e15ca285bdcfefd, future comparison URL. There are currently 2 preceding artifacts in the queue. It will probably take at least ~3.1 hours until the benchmark run finishes.

rust-timer avatar Jun 05 '25 19:06 rust-timer

Finished benchmarking commit (55fb0af4ed7d14a8bca0f3c87248c6c66fcde13b): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never @rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | -     | -              | 0     |
| Regressions ❌ (secondary)  | 0.6%  | [0.6%, 0.6%]   | 2     |
| Improvements ✅ (primary)   | -0.1% | [-0.1%, -0.1%] | 1     |
| Improvements ✅ (secondary) | -0.6% | [-1.3%, -0.1%] | 8     |
| All ❌✅ (primary)           | -0.1% | [-0.1%, -0.1%] | 1     |

Max RSS (memory usage)

Results (primary -1.1%, secondary 1.8%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | 1.5%  | [1.5%, 1.5%]   | 1     |
| Regressions ❌ (secondary)  | 1.8%  | [1.1%, 2.5%]   | 2     |
| Improvements ✅ (primary)   | -2.4% | [-4.1%, -0.7%] | 2     |
| Improvements ✅ (secondary) | -     | -              | 0     |
| All ❌✅ (primary)           | -1.1% | [-4.1%, 1.5%]  | 3     |

Cycles

This benchmark run did not return any relevant results for this metric.

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 750.971s -> 751.025s (0.01%) Artifact size: 371.78 MiB -> 371.71 MiB (-0.02%)

rust-timer avatar Jun 06 '25 00:06 rust-timer

Let's try this again

@bors2 try @rust-timer queue

compiler-errors avatar Jun 09 '25 03:06 compiler-errors

Awaiting bors try build completion.

@rustbot label: +S-waiting-on-perf

rust-timer avatar Jun 09 '25 03:06 rust-timer

:hourglass: Trying commit e1567dff243135a84f8e348528da782bee1d13e9 with merge 8c7ffa9414d872a8693f2144760144310f152c7f…

To cancel the try build, run the command @bors2 try cancel.

rust-bors[bot] avatar Jun 09 '25 03:06 rust-bors[bot]

:sunny: Try build successful (CI). Build commit: 8c7ffa9414d872a8693f2144760144310f152c7f

rust-bors[bot] avatar Jun 09 '25 05:06 rust-bors[bot]

Queued 8c7ffa9414d872a8693f2144760144310f152c7f with parent c31cccb7b5cc098b1a8c1794ed38d7fdbec0ccb0, future comparison URL. There is currently 1 preceding artifact in the queue. It will probably take at least ~2.5 hours until the benchmark run finishes.

rust-timer avatar Jun 09 '25 05:06 rust-timer

Finished benchmarking commit (8c7ffa9414d872a8693f2144760144310f152c7f): comparison URL.

Overall result: ✅ improvements - no action needed

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

@bors rollup=never @rustbot label: -S-waiting-on-perf -perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | -     | -              | 0     |
| Regressions ❌ (secondary)  | -     | -              | 0     |
| Improvements ✅ (primary)   | -     | -              | 0     |
| Improvements ✅ (secondary) | -0.9% | [-1.5%, -0.4%] | 14    |
| All ❌✅ (primary)           | -     | -              | 0     |

Max RSS (memory usage)

Results (primary 3.1%, secondary -5.8%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | 3.1%  | [2.0%, 4.3%]   | 2     |
| Regressions ❌ (secondary)  | -     | -              | 0     |
| Improvements ✅ (primary)   | -     | -              | 0     |
| Improvements ✅ (secondary) | -5.8% | [-8.9%, -2.7%] | 2     |
| All ❌✅ (primary)           | 3.1%  | [2.0%, 4.3%]   | 2     |

Cycles

This benchmark run did not return any relevant results for this metric.

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 751.176s -> 750.384s (-0.11%) Artifact size: 372.27 MiB -> 372.21 MiB (-0.02%)

rust-timer avatar Jun 09 '25 08:06 rust-timer

I don't see why we shouldn't land this, since I kinda like the invariant that when we equate two vars, the root is always the lowest vid. But it's also basically useless today, so I could see us tabling this for the future too.
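Continuing the toy `VarTable` sketch from the PR description above (hypothetical code, not rustc's implementation), this invariant is what makes the resulting roots independent of the order in which vars get equated:

```rust
fn main() {
    // Equate the same three vars in two different orders; with the
    // "lowest root wins" rule both tables resolve everything to vid 0.
    let mut a = VarTable::new();
    let (x, y, z) = (a.new_var(), a.new_var(), a.new_var());
    a.equate(z, y);
    a.equate(y, x);

    let mut b = VarTable::new();
    let (x2, y2, z2) = (b.new_var(), b.new_var(), b.new_var());
    b.equate(x2, y2);
    b.equate(z2, x2);

    assert_eq!(a.root(z), 0);
    assert_eq!(b.root(z2), 0);
}
```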

r? lcnr

compiler-errors avatar Jun 09 '25 17:06 compiler-errors

It's still a minor performance improvement, and I agree that this change is desirable regardless of perf.

It also feels better wrt fudging and whatnot.

@bors r+ rollup=never

lcnr avatar Jun 10 '25 09:06 lcnr

:pushpin: Commit e1567dff243135a84f8e348528da782bee1d13e9 has been approved by lcnr

It is now in the queue for this repository.

bors avatar Jun 10 '25 09:06 bors

:hourglass: Testing commit e1567dff243135a84f8e348528da782bee1d13e9 with merge 2b0274c71dba0e24370ebf65593da450e2e91868...

bors avatar Jun 11 '25 03:06 bors

:sunny: Test successful - checks-actions. Approved by: lcnr. Pushing 2b0274c71dba0e24370ebf65593da450e2e91868 to master...

bors avatar Jun 11 '25 06:06 bors

What is this? This is an experimental post-merge analysis report that shows differences in test outcomes between the merged PR and its parent PR.

Comparing 1c047506f94cd2d05228eb992b0a6bbed1942349 (parent) -> 2b0274c71dba0e24370ebf65593da450e2e91868 (this PR)

Test differences

No test diffs found

Test dashboard

Run

cargo run --manifest-path src/ci/citool/Cargo.toml -- \
    test-dashboard 2b0274c71dba0e24370ebf65593da450e2e91868 --output-dir test-dashboard

And then open test-dashboard/index.html in your browser to see an overview of all executed tests.

Job duration changes

  1. dist-apple-various: 7994.5s -> 6762.3s (-15.4%)
  2. mingw-check-tidy: 75.9s -> 66.6s (-12.2%)
  3. aarch64-gnu: 6304.3s -> 6799.1s (7.8%)
  4. mingw-check-1: 1835.7s -> 1968.4s (7.2%)
  5. x86_64-apple-1: 6861.4s -> 7286.4s (6.2%)
  6. armhf-gnu: 4779.0s -> 5021.3s (5.1%)
  7. dist-x86_64-musl: 7357.6s -> 7010.5s (-4.7%)
  8. dist-i686-mingw: 7754.7s -> 8114.6s (4.6%)
  9. i686-gnu-1: 7884.8s -> 8247.0s (4.6%)
  10. dist-loongarch64-musl: 4776.3s -> 4984.1s (4.4%)
How to interpret the job duration changes?

Job durations can vary a lot, based on the actual runner instance that executed the job, system noise, invalidated caches, etc. The table above is provided mostly for t-infra members, for simpler debugging of potential CI slow-downs.

github-actions[bot] avatar Jun 11 '25 06:06 github-actions[bot]

Finished benchmarking commit (2b0274c71dba0e24370ebf65593da450e2e91868): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Our benchmarks found a performance regression caused by this PR. This might be an actual regression, but it can also be just noise.

Next Steps:

  • If the regression was expected or you think it can be justified, please write a comment with sufficient written justification, and add @rustbot label: +perf-regression-triaged to it, to mark the regression as triaged.
  • If you think that you know of a way to resolve the regression, try to create a new PR with a fix for the regression.
  • If you do not understand the regression or you think that it is just noise, you can ask the @rust-lang/wg-compiler-performance working group for help (members of this group were already notified of this PR).

@rustbot label: +perf-regression cc @rust-lang/wg-compiler-performance

Instruction count

Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | 0.1%  | [0.1%, 0.1%]   | 1     |
| Regressions ❌ (secondary)  | 0.3%  | [0.3%, 0.3%]   | 2     |
| Improvements ✅ (primary)   | -     | -              | 0     |
| Improvements ✅ (secondary) | -0.9% | [-1.2%, -0.4%] | 9     |
| All ❌✅ (primary)           | 0.1%  | [0.1%, 0.1%]   | 1     |

Max RSS (memory usage)

Results (primary -1.5%, secondary 4.1%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | -     | -              | 0     |
| Regressions ❌ (secondary)  | 6.6%  | [3.6%, 9.7%]   | 2     |
| Improvements ✅ (primary)   | -1.5% | [-1.5%, -1.5%] | 1     |
| Improvements ✅ (secondary) | -0.9% | [-0.9%, -0.9%] | 1     |
| All ❌✅ (primary)           | -1.5% | [-1.5%, -1.5%] | 1     |

Cycles

Results (secondary -2.7%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)    | -     | -              | 0     |
| Regressions ❌ (secondary)  | -     | -              | 0     |
| Improvements ✅ (primary)   | -     | -              | 0     |
| Improvements ✅ (secondary) | -2.7% | [-2.7%, -2.7%] | 1     |
| All ❌✅ (primary)           | -     | -              | 0     |

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 756.123s -> 754.155s (-0.26%) Artifact size: 372.14 MiB -> 372.17 MiB (0.01%)

rust-timer avatar Jun 11 '25 11:06 rust-timer

The single regression on a primary benchmark is a doc build and it's super tiny; the rest are tiny improvements.

@rustbot label: +perf-regression-triaged

Kobzol avatar Jun 17 '25 06:06 Kobzol