cpu usage pegged at 100% seemingly forever
I've had the RLS CPU usage pegged at 100% for hours now. I attached with lldb and found this:
(lldb) thread list
Process 68137 stopped
* thread #1: tid = 0x8035aa8, 0x00007fff8b814362 libsystem_kernel.dylib`read + 10, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
thread #2: tid = 0x8035aa9, 0x00007fff8b812db6 libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'dispatch-worker'
thread #3: tid = 0x8035aaa, 0x00007fff8b812db6 libsystem_kernel.dylib`__psynch_cvwait + 10
thread #4: tid = 0x8035ab7, 0x00007fff8b812db6 libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'request-worker-0'
thread #5: tid = 0x8035ab8, 0x00007fff8b812db6 libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'request-worker-1'
thread #6: tid = 0x8035ab9, 0x00007fff8b812db6 libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'request-worker-2'
thread #7: tid = 0x8035aba, 0x00007fff8b812db6 libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'request-worker-3'
thread #8: tid = 0x8035abb, 0x00007fff8b812db6 libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'request-worker-4'
thread #9: tid = 0x8035abc, 0x00007fff981d413a libsystem_platform.dylib`_platform_memmove$VARIANT$Haswell + 538, name = 'request-worker-5'
thread #10: tid = 0x8035abd, 0x00007fff8b812db6 libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'request-worker-6'
thread #11: tid = 0x8035abe, 0x00007fff8b812db6 libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'request-worker-7'
thread #12: tid = 0x8035ac5, 0x00007fff8b813efa libsystem_kernel.dylib`kevent_qos + 10, queue = 'com.apple.libdispatch-manager'
Most of the threads appear to be asleep on cond vars, but here are some backtraces from those that are not:
(lldb) bt
* thread #9: tid = 0x8035abc, 0x00007fff981d413a libsystem_platform.dylib`_platform_memmove$VARIANT$Haswell + 538, name = 'request-worker-5'
* frame #0: 0x00007fff981d413a libsystem_platform.dylib`_platform_memmove$VARIANT$Haswell + 538
frame #1: 0x000000010ac53049 rls`rls::actions::hover::def_docs::h84c8a7c02e311ea2 + 3401
frame #2: 0x000000010ac5997a rls`rls::actions::hover::tooltip::hb19c98035d91e833 + 15514
frame #3: 0x000000010ac95832 rls`rls::actions::requests::_$LT$impl$u20$rls..server..dispatch..RequestAction$u20$for$u20$languageserver_types..request..HoverRequest$GT$::handle::heca864bf9951aea0 + 34
frame #4: 0x000000010ad0cd38 rls`std::panicking::try::do_call::h20e73eac961a07c3 (.llvm.9386041415296837890) + 248
frame #5: 0x000000010fad964f libstd-574c17a59b6befe6.dylib`__rust_maybe_catch_panic + 31
frame #6: 0x000000010addd22f rls`_$LT$std..panic..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::hd46e553c79eda08c + 127
frame #7: 0x000000010ad0d31e rls`std::panicking::try::do_call::h492127b77d05a6f5 (.llvm.9386041415296837890) + 46
frame #8: 0x000000010fad964f libstd-574c17a59b6befe6.dylib`__rust_maybe_catch_panic + 31
frame #9: 0x000000010adfb244 rls`_$LT$rayon_core..job..HeapJob$LT$BODY$GT$$u20$as$u20$rayon_core..job..Job$GT$::execute::hef56bcace51779e3 (.llvm.10552412072698680047) + 452
frame #10: 0x000000010b9f4795 rls`rayon_core::registry::WorkerThread::wait_until_cold::h6021efbd2e67db36 + 277
frame #11: 0x000000010b9f4d79 rls`rayon_core::registry::main_loop::h650605c29f17019e + 649
frame #12: 0x000000010b9f5850 rls`std::panicking::try::do_call::h2dcd78a9d301588d (.llvm.15849102785470607581) + 48
frame #13: 0x000000010fad964f libstd-574c17a59b6befe6.dylib`__rust_maybe_catch_panic + 31
frame #14: 0x000000010b9f5626 rls`_$LT$F$u20$as$u20$alloc..boxed..FnBox$LT$A$GT$$GT$::call_box::h49dfef573fc1c964 + 198
frame #15: 0x000000010facc978 libstd-574c17a59b6befe6.dylib`std::sys_common::thread::start_thread::hce7e523090a00f71 + 136
frame #16: 0x000000010fa9cb59 libstd-574c17a59b6befe6.dylib`std::sys::unix::thread::Thread::new::thread_start::h461700e9de9353e2 + 9
frame #17: 0x00007fff9541599d libsystem_pthread.dylib`_pthread_body + 131
frame #18: 0x00007fff9541591a libsystem_pthread.dylib`_pthread_start + 168
frame #19: 0x00007fff95413351 libsystem_pthread.dylib`thread_start + 13
and
(lldb) bt
* thread #1: tid = 0x8035aa8, 0x00007fff8b814362 libsystem_kernel.dylib`read + 10, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
* frame #0: 0x00007fff8b814362 libsystem_kernel.dylib`read + 10
frame #1: 0x000000010fab6ef1 libstd-574c17a59b6befe6.dylib`_$LT$std..io..stdio..StdinLock$LT$$u27$a$GT$$u20$as$u20$std..io..BufRead$GT$::fill_buf::h7249bc63fbf626cd + 113
frame #2: 0x000000010ac40d6e rls`std::io::append_to_string::h79f80e933a63f367 + 62
frame #3: 0x000000010acfb0a0 rls`_$LT$rls..server..io..StdioMsgReader$u20$as$u20$rls..server..io..MessageReader$GT$::read_message::h8ca8efb05b84c1cc + 176
frame #4: 0x000000010ac7d894 rls`rls::server::run_server::h2fb6f8dde9ff33a2 + 708
frame #5: 0x000000010ad95219 rls`rls::main_inner::h9b69d1f4f8f70c58 + 969
frame #6: 0x000000010ad94e49 rls`rls::main::h43927dbe0d67789f + 9
frame #7: 0x000000010ad5af06 rls`std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h33dc9d029f2badde + 6
frame #8: 0x000000010fac7f68 libstd-574c17a59b6befe6.dylib`std::panicking::try::do_call::h1121971afcb56fd2 (.llvm.2181859099862282205) + 24
frame #9: 0x000000010fad964f libstd-574c17a59b6befe6.dylib`__rust_maybe_catch_panic + 31
frame #10: 0x000000010faaea1d libstd-574c17a59b6befe6.dylib`std::rt::lang_start_internal::h5737820e17c1b194 + 237
frame #11: 0x000000010ad956fc rls`main + 44
frame #12: 0x000000010ac25e34 rls`start + 52
I'm curious if there's an edge case where the doc extraction is getting into an infinite loop.
Would you be able to build and run using: https://github.com/aloucks/rls/tree/extract-docs-max-line-reads
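Judging from the branch name, that guard presumably caps how many lines the doc extraction will read before giving up. A rough sketch of that kind of bound (the names and the constant here are hypothetical, not the actual RLS code):

```rust
// Hypothetical sketch of a max-line-reads guard for doc extraction;
// names and the constant are illustrative, not the real RLS code.
const MAX_LINE_READS: usize = 1_000;

fn extract_docs(lines: &[&str], def_line: usize) -> Option<String> {
    let mut docs = Vec::new();
    let mut reads = 0;

    // Walk upward from the line above the definition, collecting `///` lines.
    for line in lines[..def_line].iter().rev() {
        reads += 1;
        if reads > MAX_LINE_READS {
            // Bail out instead of spinning (effectively) forever.
            return None;
        }
        let trimmed = line.trim_start();
        if let Some(doc) = trimmed.strip_prefix("///") {
            docs.push(doc.trim());
        } else if !trimmed.is_empty() {
            // First non-doc, non-blank line ends the comment block.
            break;
        }
    }
    docs.reverse();
    if docs.is_empty() { None } else { Some(docs.join("\n")) }
}
```

With a cap like this, a pathological input wastes at most a bounded amount of work instead of pinning a rayon worker thread (like thread #9 above) forever.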
Steps to reproduce would be really useful too. Is this bug occurring reliably?
Pretty reliably, yes. When editing chalk.
How do I build + run from a custom branch? I've only ever used the RLS from rustup :)
There are some instructions here: https://github.com/rust-lang-nursery/rls/blob/master/contributing.md
If using vscode, you can set `rust-client.rlsPath` to point to your built copy of rls. Note that you'll need to update your path according to the link above. Alternatively, you can copy/overwrite the rls executable in the `$(rustc --print sysroot)/bin` directory.
I'm seeing this very often with trust-dns as well.
Seeing the same issue on a project of mine; it'll just run for hours, consuming a full core without any sign of getting better, until I kill it. (Using it through Emacs.)
Can you set `RUST_LOG=rls::actions::hover=TRACE` and test if this is still an issue after #1091 is merged and the change is reflected in the latest nightly?
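For anyone unfamiliar with that setting: `RUST_LOG` uses env_logger-style directives, where `module::path=LEVEL` enables a log level for just that module. A minimal standalone illustration of the same pattern (a generic sketch using the `log` and `env_logger` crates with a hypothetical binary name, not RLS's actual logger setup):

```rust
// Generic RUST_LOG filtering sketch (depends on the log and env_logger
// crates); not RLS's actual initialization code.
mod hover {
    pub fn lookup() {
        // Only emitted when this module's log level allows TRACE.
        log::trace!("hover lookup started");
    }
}

fn main() {
    // Reads RUST_LOG, e.g. RUST_LOG=mybin::hover=trace enables trace-level
    // logs for just the `hover` module of a binary named `mybin`.
    env_logger::init();
    hover::lookup();
}
```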
These changes have been merged. Could you test if this is still happening with the latest nightly?
I just updated nightly and am not seeing the issue in my projects. I’ll post back if I do see it again.
Nice work!
So far things are looking good on my side as well with the latest nightly version.
@nikomatsakis are you still seeing this behavior with the latest nightly RLS while editing chalk?
I've been running on Chalk for a while and can't repro, so closing. @nikomatsakis feel free to reopen if you can still repro.
This came back :( Pretty much happens every time when I am working on https://github.com/dignifiedquire/pgp now. There is one change, though: it stays at 150% CPU usage even after I stop making changes to the code for hours (until I kill it).
Will post a trace in a bit.
`rls --version`: rls-preview 0.130.5 (1c755ef 2018-10-20)
Edit: added rls version. Edit 2: I haven't figured out how to do a trace. I'm using it through https://github.com/emacs-lsp/lsp-rust and I'm not sure how to get the output and the env flag set.
> Pretty much happens every time when I am working on https://github.com/dignifiedquire/pgp now.
What kind of work are you doing to trigger the high CPU use? (What combination of edits and builds?)
> what kind of work
while code != working: make small edits & run tests
Interestingly enough, after I posted this and tried to reproduce another time, it is not happening. Due to my setup I had a bunch of other Rust-based projects open, which seemed to have lingering rls instances attached to them. Now that all of those are gone, things seem to work well for the moment. Sorry for not being more helpful; I will try to find out more as soon as it happens again.
I noticed this the other day while doing a lot of edits and commits and then running `cargo clippy` from the terminal, while it was clear that the RLS was running concurrently. Part of my script to run clippy is to first `cargo clean -p my_project`, so I think I cause a lot of churn on the project. I do this generally from the terminal in vscode, if it matters that vscode is always in the foreground.
I am experiencing this as well. It happens when I have bugs AND markdown documents at the same time.
I see the same results; sometimes the CPU usage is around 200%. Should I open another bug report for this, since this one is closed?
This is the case for me too, at around 300% @nrc
@NikosEfthias which `rls --version` do you have? Is there anything specific about the scenario you encounter this in? (It'd help a lot to nail down the reproduction steps.)
This is happening to me as well, using rls 1.34.0 (6840dd6 2019-03-10): very high CPU with VSCode, and eating all 16GB of RAM. My computer completely crashes from time to time while rls is running since this update.
Could you share the repositories in which you encounter this issue? Preferably with what you might have been doing at the time (e.g. last time, hovering the first item in a file caused RLS to busy-loop).
It happens while editing code normally. In my case I had updated VSCode as well on the same day, and now the WatcherService from VSCode started eating up all my RAM until my computer completely freezes. This is not a problem with RLS (at least in my case).
@Xanewok mine is rls-preview 1.33.0
I'm using Emacs lsp-mode, and I've found a consistent way to reproduce this:
- `cargo init`
- Add `libc = "*"` under `[dependencies]`
- In `main.rs`, add the following line at the top: `use libc::{c_int, pid_t, user_regs_struct};`
- Move the cursor onto that line. → RLS will start using 100% CPU.

`rls --version`: rls-preview 1.33.0 (9f3c906 2019-01-20)
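For reference, the entire `main.rs` in these steps is just the default `cargo init` body plus that one import (the `#[allow]` below only silences the unused-import warning and is not part of the original steps):

```rust
// Complete main.rs for the reproduction above; hovering inside the
// import list is what pegs the CPU. The #[allow] only silences the
// unused-import warning and is not part of the original steps.
#[allow(unused_imports)]
use libc::{c_int, pid_t, user_regs_struct};

fn main() {
    println!("Hello, world!");
}
```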
I've found an easier way to reproduce (without any specific text editor):
- `cargo init`
- Add `libc = "*"` under `[dependencies]`
- In `main.rs`, add the following line at the top: `use libc::{c_int, pid_t, user_regs_struct};`
- `rls --cli`
- Type `hover src/main.rs 0 25` → RLS will start using 100% CPU.
These are good reproduction steps. I see the issue with `rls +stable --cli`; however, `rls +nightly-2019-03-23 --cli` seems to work fine. So this is an issue with the latest stable release that is presumably already fixed in the current code base.
I'll re-test when rust 1.34 is out, if I get time, as the next stable rls should be fixed.
Yep 1.34 pre-release seems fixed.
I've encountered this issue on every single Linux machine I've had (Ubuntu & Fedora): RLS starts building, maxing out one core at 100%. Once the build is done, the laptop fans quiet down and CPU usage drops. But after a single edit to a file, RLS seems to start building the entire repo again. This happens over and over, which means one core sits at 100% seemingly forever.
My laptop is powerful enough, but the constantly spinning fans are starting to annoy my coworkers...
Thanks
With the recent work on out-of-process compilation (#1536), it can hopefully distribute the load more evenly. Another potential boost is shipping the parallel compiler by default, but that won't be soon, I'm afraid.
I'm seeing rls take up 100% CPU on my machine. I thought it was Chrome, but it turns out it's rls. I use ALE with `rls` as the formatter/linter/fixer, and vim-lsp-settings to set up completion.
I don't know why, but it happens when I open Chrome; at that point rls seems to cause high load as well.
I suspect that `rls` under the ALE fixer/linter portion is causing this issue, but it's hard to reproduce.