Introduce 'nice' value under cargo.toml -> [build]
Describe the problem you are trying to solve
Often enough I have a cargo build running in the background, and I open up some entertainment while it finishes, but then I notice that my desktop is generally slow (no surprise: something is hammering the CPU in the background). I've begun doing `nice -n 20 bash -c "cargo build"` (or something similar) recently; however, I still sometimes forget it, get frustrated, cancel the build, and restart it with the (pretty verbose) nice prefix.
Describe the solution you'd like
Add a string/int `nice` config parameter to cargo.toml, under `[build]`.
If an int: an absolute nice value that cargo should attempt to set on compilation threads/processes.
If a string: a relative adjustment ("-2", "+5") applied to the current nice value of compilation threads/processes.
This setting would be ignored on targets that don't support nice-style prioritisation (Windows comes to mind), though maybe a generic `priority` config value could be introduced that attempts this across many operating systems.
If the requested nice value is lower than the current one (i.e. higher priority) and the current user has no privilege to lower it, the setting could/should be ignored, with a warning printed to stderr or stdout.
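A sketch of what the proposed knob might look like; note that the key name `nice` and both value forms are hypothetical — nothing here exists in cargo today:

```toml
# Hypothetical config sketch: this key does not exist in cargo today.
[build]
nice = 19      # int: absolute nice value for compilation processes
# nice = "+5"  # string: adjustment relative to the current nice value
```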
cc #9221
(I should note that `jobs` != absolute CPU usage, because of kernel scheduling and multi-threaded compilation and such; `nice` in this case therefore actually makes a good prioritization configuration value.)
I'm looking for this exact functionality.
I've tried aliases and bash functions, but I think cargo supporting it directly would be... nice :smile:
I'm surprised more build tools haven't added support, as it results in a much more responsive experience in browsers/apps/etc. while compiling.
You could also try `schedtool -B -e cargo <args>` or `schedtool -I -e cargo <args>` for the BATCH and IDLE scheduling policies, depending on how much you want them deprioritized.
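For one-off runs, plain `nice` works wherever POSIX does, while `schedtool` (Linux-only, if installed) exposes the BATCH/IDLE policies mentioned above. A small sketch:

```shell
# Lowest conventional priority, portable across Unix-likes:
#   nice -n 19 cargo build
#
# Linux-only alternatives via schedtool:
#   schedtool -B -e cargo build   # SCHED_BATCH: mild deprioritization
#   schedtool -I -e cargo build   # SCHED_IDLE: run only on otherwise-idle CPUs
#
# `nice` with no operands prints the current niceness, so you can verify
# that a prefixed child really is deprioritized (prints 10 when starting
# from niceness 0):
nice -n 10 sh -c nice
```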
pgray, `nice` is not part of a build tool; it lives outside it. If you want a nice experience, use `nice cargo ...`. The build tool does not need this complexity. On Windows this is not called nice but `start /low` or so. Again, nothing to do with cargo.
A way to go might be to control the number of jobs, as suggested in #12912. But personally I am a fan of using nice — as long as it does not cause the out-of-memory killer to terminate my browser, for example.
IMO this would be a better fit for `.cargo/config.toml` than a crate's manifest — this build option doesn't really have anything to do with a specific crate.
Also, limiting job count/CPU usage doesn't fulfill the same use case, as it doesn't adapt to other system load. With a low priority, jobs can still use 100% of the CPU if no other process needs it, while dropping as low as necessary when the user is doing something else mildly intensive.
FWIW, the OOM-kill issue is tracked in https://github.com/rust-lang/cargo/issues/9157, as there might be other approaches specific to it.
This is really important for lower-end systems that can get completely hung installing basic software.
Case in point:
I just spun up an e2-micro virtual machine in Google Cloud Platform (GCP) and installed the Rust toolchain on it. Then I tried to install stu, an S3 storage terminal UI (TUI) application, with `cargo install stu`. The VM eventually hung completely, and I lost my SSH session to it, because it was compiling dependencies.
If Cargo were running as a lower-priority Linux process, I probably would not have lost connectivity to my remote servers.
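Until cargo supports this natively, a small wrapper keeps you from forgetting the prefix on constrained machines. The function name `cargo_nice` is illustrative, not anything cargo provides:

```shell
# Illustrative wrapper: run any cargo command at the lowest conventional
# CPU priority, so a tiny VM stays reachable over SSH during the build.
cargo_nice() {
  nice -n 19 cargo "$@"
}

# Example usage, also capping parallelism during install:
#   cargo_nice install -j 1 stu
```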
> You could also try `schedtool -B -e cargo <args>` or `schedtool -I -e cargo <args>` for the BATCH and IDLE scheduling policies, depending on how much you want them deprioritized.
Linux introduced a new mechanism somewhat similar to SCHED_BATCH. Threads/processes can now set their preferred time slice length. Latency sensitive tasks can set a small time slice, meaning they get scheduled earlier but also preempted earlier. Compute-bound tasks can set a larger timeslice, meaning they get to stay longer on the CPU, but get scheduled later after the lower-latency tasks.
I have run cargo under `perf sched record` (and the `performance` CPU governor) and then looked at `perf sched timehist` (columns after the task name: wait time, scheduling delay, run time, all in ms):

```
perf sched timehist
[...]
186075.889699 [0028] rustc[2111921/2111909] 0.000 0.002 11.462
186075.894246 [0020] rustc[2111895/2111888] 10.605 0.012 0.012
186075.900738 [0028] rustc[2111921/2111909] 0.011 0.011 11.026
186075.934724 [0001] rustc[2111922/2111911] 0.000 0.013 56.238
186075.963687 [0006] rustc[2111930/2111909] 0.000 0.004 0.071
186075.963757 [0003] rustc[2111921/2111909] 0.015 0.015 63.004
186075.963777 [0005] rustc[2111931/2111909] 0.000 0.003 0.037
186075.963819 [0005] rustc[2111931/2111909] 0.023 0.008 0.018
186075.963923 [0005] rustc[2111931/2111909] 0.099 0.007 0.005
186075.963940 [0006] rustc[2111930/2111909] 0.006 0.006 0.246
186075.965697 [0003] rustc[2111921/2111909] 0.455 0.008 1.484
186075.965722 [0006] rustc[2111930/2111909] 1.770 0.008 0.011
186075.967515 [0003] rustc[2111921/2111909] 0.073 0.006 1.744
186075.996980 [0003] rustc[2111921/2111909] 29.456 0.009 0.008
186076.020717 [0035] rustc[2111925/2111912] 0.000 0.002 141.316
186076.024721 [0013] rustc[2111928/2111913] 0.000 0.015 144.227
186076.033716 [0013] rustc[2111928/2111913] 0.015 0.015 8.979
186076.106142 [0042] rustc[2111918/2111907] 0.013 0.013 223.622
186076.106174 [0005] rustc[2111934/2111907] 0.000 0.004 0.041
186076.106275 [0005] rustc[2111934/2111907] 0.096 0.008 0.004
186076.106284 [0005] rustc[2111934/2111907] 0.005 0.000 0.004
186076.106296 [0006] rustc[2111933/2111907] 0.000 0.004 0.292
186076.115763 [0042] rustc[2111918/2111907] 0.431 0.010 9.189
186076.115791 [0006] rustc[2111933/2111907] 9.483 0.009 0.010
186076.118191 [0042] rustc[2111918/2111907] 0.088 0.011 2.339
186076.158690 [0001] rustc[2111922/2111911] 0.029 0.029 223.935
186076.251625 [0002] rustc[2111922/2111911] 0.016 0.016 92.918
186076.251684 [0002] rustc[2111922/2111911] 0.035 0.035 0.023
186076.252241 [0026] rustc[2111922/2111911] 0.014 0.014 0.542
186076.252426 [0002] rustc[2111922/2111911] 0.013 0.013 0.171
186076.252533 [0000] rustc[2111922/2111911] 0.015 0.015 0.091
186076.253917 [0013] rustc[2111928/2111913] 0.034 0.034 220.166
186076.279156 [0033] rustc[2111467/2111452] 409.202 0.013 0.019
186076.378679 [0035] rustc[2111925/2111912] 0.014 0.014 357.948
186076.379859 [0042] rustc[2111918/2111907] 261.652 0.009 0.015
186076.633675 [0026] rustc[2111922/2111911] 0.018 0.018 381.123
186076.696100 [0026] rustc[2111922/2111911] 0.178 0.178 62.246
186076.696147 [0006] rustc[2111948/2111911] 0.000 0.005 0.054
186076.696175 [0006] rustc[2111948/2111911] 0.009 0.000 0.019
186076.696271 [0006] rustc[2111948/2111911] 0.091 0.006 0.004
186076.696289 [0022] rustc[2111947/2111911] 0.000 0.017 0.305
186076.697999 [0026] rustc[2111922/2111911] 0.459 0.009 1.439
186076.698028 [0022] rustc[2111947/2111911] 1.725 0.010 0.013
186076.702520 [0026] rustc[2111922/2111911] 0.083 0.007 4.437
186076.711036 [0026] rustc[2111922/2111911] 8.505 0.008 0.010
186076.889668 [0011] rustc[2111925/2111912] 0.026 0.026 510.962
186076.993663 [0038] rustc[2111928/2111913] 0.011 0.011 739.733
186077.018656 [0011] rustc[2111925/2111912] 0.028 0.028 128.958
186077.145660 [0011] rustc[2111925/2111912] 0.027 0.027 126.976
186077.173946 [0038] rustc[2111928/2111913] 0.007 0.007 180.275
186077.209658 [0011] rustc[2111925/2111912] 0.021 0.021 63.976
186077.401651 [0011] rustc[2111925/2111912] 0.021 0.021 191.970
186077.465654 [0011] rustc[2111925/2111912] 0.020 0.020 63.982
186077.465662 [0013] rustc[2111928/2111913] 0.014 0.014 291.702
186077.657642 [0011] rustc[2111925/2111912] 0.021 0.021 191.966
186077.785689 [0011] rustc[2111925/2111912] 0.009 0.009 128.038
186078.041641 [0011] rustc[2111925/2111912] 0.018 0.018 255.932
186078.120652 [0012] rustc[2111928/2111913] 0.019 0.019 654.969
186078.143626 [0011] rustc[2111925/2111912] 0.046 0.046 101.938
186078.145621 [0013] rustc[2111928/2111913] 0.022 0.022 24.946
186078.169631 [0011] rustc[2111925/2111912] 0.006 0.006 25.999
186078.332640 [0013] rustc[2111928/2111913] 0.005 0.005 187.012
186078.425636 [0011] rustc[2111925/2111912] 0.021 0.021 255.982
186078.489621 [0011] rustc[2111925/2111912] 0.021 0.021 63.963
186078.560641 [0012] rustc[2111928/2111913] 0.016 0.016 227.984
186078.580718 [0011] rustc[2111925/2111912] 0.020 0.020 91.077
186078.580746 [0022] rustc[2111973/2111912] 0.000 0.004 0.179
186078.580894 [0022] rustc[2111973/2111912] 0.007 0.000 0.140
186078.582742 [0006] rustc[2111975] 0.000 0.004 1.359
[...]
```
It looks like rustc stays on-CPU for 100 ms and more during compute-heavy phases, though it also sees shorter timeslices, presumably when it's blocked by IO or mutexes.
So maybe we could tell the kernel that cargo/rustc's preferred time slice is something like 50-100 ms, which should give it a bit of a penalty whenever other tasks are running with the default sched_runtime (3 ms on my system).
Though someone would have to test whether that helps interactive applications. If the system is bottlenecked on IO it probably wouldn't do much.