Use `iter::repeat_n` to implement `Vec::extend_with`
This replaces the manual implementation of `Vec::extend_with` with one based on `iter::repeat_n`, simplifying the code and fixing the issue identified in rust-lang/rust#120050.
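For context, here is a minimal sketch of the idea, assuming a free function for illustration (`extend_with_repeat` is a hypothetical name; the real `Vec::extend_with` is a private method inside std):

```rust
use std::iter;

/// Illustrative stand-in for the private `Vec::extend_with` helper:
/// instead of a hand-rolled write loop with manual length bookkeeping,
/// delegate to `Extend` over `iter::repeat_n` (stable since Rust 1.82),
/// which clones `value` for all but the last of the `n` elements.
fn extend_with_repeat<T: Clone>(vec: &mut Vec<T>, n: usize, value: T) {
    vec.extend(iter::repeat_n(value, n));
}

fn main() {
    let mut v = vec![1u32];
    extend_with_repeat(&mut v, 3, 7);
    assert_eq!(v, [1, 7, 7, 7]);
}
```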
r? @Noratrieb
rustbot has assigned @Noratrieb. They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.
Use r? to explicitly pick a reviewer
Let's see what effect this has in the compiler! @bors try @rust-timer queue
Awaiting bors try build completion.
@rustbot label: +S-waiting-on-perf
⌛ Trying commit 4bf70721bef3bec2c01ded28d65b0f110a8f14e6 with merge 243c5cc6905efbdf778394d96f4b83701962964f...
Given that this branch doesn't look good without #130887, let's block it on that. @rustbot blocked
☀️ Try build successful - checks-actions
Build commit: 243c5cc6905efbdf778394d96f4b83701962964f (243c5cc6905efbdf778394d96f4b83701962964f)
Queued 243c5cc6905efbdf778394d96f4b83701962964f with parent 76f3ff605962d7046bc1537597ceed5e12325f54, future comparison URL. There is currently 1 preceding artifact in the queue. It will probably take at least ~2.4 hours until the benchmark run finishes.
Finished benchmarking commit (243c5cc6905efbdf778394d96f4b83701962964f): comparison URL.
Overall result: ❌✅ regressions and improvements - please read the text below
Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.
Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.
@bors rollup=never @rustbot label: -S-waiting-on-perf +perf-regression
Instruction count
This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.3% | [0.1%, 0.7%] | 16 |
| Regressions ❌ (secondary) | 0.4% | [0.1%, 0.6%] | 9 |
| Improvements ✅ (primary) | -0.9% | [-0.9%, -0.9%] | 1 |
| Improvements ✅ (secondary) | -1.3% | [-1.8%, -0.5%] | 8 |
| All ❌✅ (primary) | 0.3% | [-0.9%, 0.7%] | 17 |
Max RSS (memory usage)
Results (primary 0.1%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 5.6% | [0.9%, 11.4%] | 3 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -8.1% | [-11.1%, -5.0%] | 2 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.1% | [-11.1%, 11.4%] | 5 |
Cycles
Results (secondary -2.1%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -2.1% | [-2.1%, -2.1%] | 1 |
| All ❌✅ (primary) | - | - | 0 |
Binary size
Results (primary 0.4%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.5% | [0.0%, 1.3%] | 43 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.1% | [-0.2%, -0.0%] | 7 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.4% | [-0.2%, 1.3%] | 50 |
Bootstrap: 773.805s -> 774.205s (0.05%) Artifact size: 332.32 MiB -> 331.64 MiB (-0.20%)
Could we do another perf run? I've cherry-picked the commit from #130887 since, at least via Compiler Explorer, the codegen looked better.
Sure! @bors try @rust-timer queue
Awaiting bors try build completion.
@rustbot label: +S-waiting-on-perf
⌛ Trying commit b080e187dbc94ac64e0ee15f12be468e24704387 with merge 78d5ed40d7c459a7e5ea4cb6bea9918a493a800f...
☀️ Try build successful - checks-actions
Build commit: 78d5ed40d7c459a7e5ea4cb6bea9918a493a800f (78d5ed40d7c459a7e5ea4cb6bea9918a493a800f)
Queued 78d5ed40d7c459a7e5ea4cb6bea9918a493a800f with parent 6c76ed5503966c39381fac64eb905ac45e346695, future comparison URL. There are currently 0 preceding artifacts in the queue. It will probably take at least ~1.4 hours until the benchmark run finishes.
Finished benchmarking commit (78d5ed40d7c459a7e5ea4cb6bea9918a493a800f): comparison URL.
Overall result: ❌ regressions - please read the text below
Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.
Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.
@bors rollup=never @rustbot label: -S-waiting-on-perf +perf-regression
Instruction count
This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.4% | [0.1%, 0.7%] | 17 |
| Regressions ❌ (secondary) | 1.3% | [0.1%, 3.2%] | 16 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.4% | [0.1%, 0.7%] | 17 |
Max RSS (memory usage)
Results (primary -5.6%, secondary 0.3%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 5.3% | [5.3%, 5.3%] | 1 |
| Regressions ❌ (secondary) | 1.5% | [1.0%, 1.8%] | 3 |
| Improvements ✅ (primary) | -11.1% | [-15.5%, -6.8%] | 2 |
| Improvements ✅ (secondary) | -3.2% | [-3.2%, -3.2%] | 1 |
| All ❌✅ (primary) | -5.6% | [-15.5%, 5.3%] | 3 |
Cycles
Results (primary -1.5%, secondary 35.1%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.9% | [0.9%, 0.9%] | 1 |
| Regressions ❌ (secondary) | 35.1% | [33.0%, 39.1%] | 6 |
| Improvements ✅ (primary) | -4.0% | [-4.0%, -4.0%] | 1 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | -1.5% | [-4.0%, 0.9%] | 2 |
Binary size
Results (primary 0.6%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.7% | [0.1%, 1.7%] | 33 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.1% | [-0.2%, -0.0%] | 9 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.6% | [-0.2%, 1.7%] | 42 |
Bootstrap: 767.632s -> 769.266s (0.21%) Artifact size: 332.19 MiB -> 332.16 MiB (-0.01%)
Looking at the performance data, borrow checking time in the Cycles benchmark for wg-grammar is the main culprit. I'm confused why that is.
The fact that this also fixes #120050 still makes these changes worth it to me.
@joboet @scottmcm Do you think the combination of #130887 + this PR, plus the fact that it could also fix #120050, has a chance of working out? I'm not sure how to interpret the benchmarks, or whether there's a way of fixing the wg-grammar Cycles regression.
@paolobarbolini #130887 has been merged. @rustbot label -S-blocked +S-waiting-on-author
I've rebased on top of master. Should we do another perf run?
Sure, with the rebase we might as well give it a shot.
@bors try @rust-timer queue
Awaiting bors try build completion.
@rustbot label: +S-waiting-on-perf
⌛ Trying commit 29565d76bb97d949162d451b45abd0870f14458b with merge c79c24ad439e510f1baa231dcaf131000f552308...
☀️ Try build successful - checks-actions
Build commit: c79c24ad439e510f1baa231dcaf131000f552308 (c79c24ad439e510f1baa231dcaf131000f552308)
Queued c79c24ad439e510f1baa231dcaf131000f552308 with parent d1d8e386c5e84c4ba857f56c3291f73c27e2d62a, future comparison URL. There are currently 3 preceding artifacts in the queue. It will probably take at least ~4.6 hours until the benchmark run finishes.
Finished benchmarking commit (c79c24ad439e510f1baa231dcaf131000f552308): comparison URL.
Overall result: ❌✅ regressions and improvements - please read the text below
Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.
Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.
@bors rollup=never @rustbot label: -S-waiting-on-perf +perf-regression
Instruction count
This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.7% | [0.2%, 3.0%] | 12 |
| Regressions ❌ (secondary) | 3.0% | [0.2%, 7.5%] | 16 |
| Improvements ✅ (primary) | -0.9% | [-1.3%, -0.5%] | 2 |
| Improvements ✅ (secondary) | -0.5% | [-1.2%, -0.2%] | 3 |
| All ❌✅ (primary) | 0.5% | [-1.3%, 3.0%] | 14 |
Max RSS (memory usage)
Results (primary 2.2%, secondary -0.1%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 4.9% | [1.1%, 7.3%] | 4 |
| Regressions ❌ (secondary) | 4.7% | [3.3%, 6.5%] | 3 |
| Improvements ✅ (primary) | -3.3% | [-4.2%, -2.4%] | 2 |
| Improvements ✅ (secondary) | -7.2% | [-8.5%, -5.9%] | 2 |
| All ❌✅ (primary) | 2.2% | [-4.2%, 7.3%] | 6 |
Cycles
Results (primary 1.7%, secondary 3.6%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 1.7% | [0.7%, 3.6%] | 3 |
| Regressions ❌ (secondary) | 3.6% | [2.2%, 10.5%] | 7 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 1.7% | [0.7%, 3.6%] | 3 |
Binary size
Results (primary 0.6%, secondary 0.1%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.
| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.6% | [0.1%, 1.7%] | 38 |
| Regressions ❌ (secondary) | 0.1% | [0.1%, 0.1%] | 1 |
| Improvements ✅ (primary) | -0.1% | [-0.1%, -0.1%] | 1 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.6% | [-0.1%, 1.7%] | 39 |
Bootstrap: 692.944s -> 693.357s (0.06%) Artifact size: 372.01 MiB -> 371.96 MiB (-0.01%)
@scottmcm Do you think that these results are worth it given that this also acts as an alternative to #120050?
Personally I still want to do this, but at the same time, with my reviewer hat on, I don't know if I can justify the perf numbers. Anywhere outside the standard library I'd just do it, but `Vec` is important enough that we'll take less-elegant implementations for perf reasons :/
I wonder if maybe there's a tweak for it that could help. An idea that came to mind: what if it specialized on `Copy`, with that path having the obvious loop (which wouldn't even need `SetLenOnDrop`), and used `repeat_n` for the more-complex non-`Copy` path? My instinct is that the `image` benchmark regression is probably dealing in trivial `Copy` stuff, so emitting simpler code for that (no we-have-defer-at-home codepath) could even be good.
EDIT: or, I suppose, if we're not allowed to specialize on `Copy` any more, this could just branch on `if const { needs_drop::<T>() }`? I don't know that that would help as much, though.
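A hedged sketch of the `const { needs_drop }` branching alternative from the EDIT (hypothetical helper name; a true `Copy` specialization would need std's unstable specialization machinery, so this only illustrates the shape of the idea):

```rust
use std::iter;
use std::mem::needs_drop;

/// Hypothetical illustration: take the obvious loop for types without
/// drop glue (a mid-loop panic in `clone` leaves nothing needing cleanup
/// beyond what `push` already guarantees), and fall back to `repeat_n`
/// for types with drop glue, where the iterator machinery handles panic
/// safety.
fn extend_with_branched<T: Clone>(vec: &mut Vec<T>, n: usize, value: T) {
    if const { !needs_drop::<T>() } {
        // Simple path: the optimizer sees a plain loop, which for small
        // Copy-like types often lowers to a memset.
        vec.reserve(n);
        for _ in 0..n {
            vec.push(value.clone());
        }
    } else {
        // Drop-glue path: `extend` over `repeat_n` tracks how many
        // elements were fully written before any panic.
        vec.extend(iter::repeat_n(value, n));
    }
}

fn main() {
    let mut v: Vec<String> = vec!["a".into()];
    extend_with_branched(&mut v, 2, "b".to_string());
    assert_eq!(v, ["a", "b", "b"]);
}
```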
Specializing on `Copy` sounds like a great idea. This should now be ready for another perf run.
Let's give it a shot!
@bors try @rust-timer queue
Awaiting bors try build completion.
@rustbot label: +S-waiting-on-perf