
improve latency of goto definition

Open BurntSushi opened this issue 6 years ago • 48 comments

So I was finally able to get rust-analyzer working in vim (with ALE), and it appears that fixing #1474 did the trick. So thank you! Working in crates like regex, I can definitely notice the speed improvements. RLS can take quite some time to catch up to changes in the source code, but rust-analyzer is almost instant.

I did, however, find a place where RLS appears to be much better: the latency at which go-to definition works. Here's the case I'm fiddling with right now:

  1. Clone the rust-lang/regex repo.
  2. Open src/prog.rs, go to line 24 and put the cursor over InstPtr (see the snippet after these steps).
  3. Save the file. This should cause rls/rust-analyzer to start processing.
  4. Run :ALEGoToDefinition as quickly as one can after saving. (Presumably, this problem isn't specific to ALE, so I guess replace this step with whatever command lets you jump to a declaration in your environment.) This should move the cursor just a few lines up to where the InstPtr type is defined.
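
For reference, the relevant part of src/prog.rs looked roughly like this at the time (paraphrased, not an exact copy of the regex source); the goto definition target in step 2 is the type alias, sitting a few lines above a use site like the struct below:

/// `InstPtr` points to an instruction in the program.
pub type InstPtr = usize;

// ...and a later use of the alias, similar to the one referenced in step 2:
pub struct InstSplit {
    pub goto1: InstPtr,
    pub goto2: InstPtr,
}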

When I do this for RLS, (4) succeeds pretty much instantly, even though RLS is still pegging my CPU. Presumably, RLS builds whatever index structure it needs for goto definition first, and is then able to use it even though it's still doing other work. (This is a guess based on observed behavior. I'm not familiar with RLS internals.)

However, when I do this for rust-analyzer, it takes about 5-7 seconds for the goto definition to actually move my cursor to the declaration site. Ideally, this should be as fast as RLS.

The overall goal I'm requesting here, I think, is to minimize the latency of the first goto definition after opening a file. This is a fairly common workflow for me personally, especially in projects that I'm unfamiliar with, where I want to be able to jump around to definition sites as quickly as possible.

Note that an alternative sequence of steps from the above set is to simply run :ALEGoToDefinition twice. The first time causes the language server to start, and the second time actually allows the language server to respond to the request. Now, with RLS, it seems like I can run goto definition twice, as quickly as I want, and it will always succeed on the second request. But with rust-analyzer, I have to wait a second or two after the first press, otherwise the second request seems to just get ignored. Once the second request is made (again, after waiting for rust-analyzer to do its initialization), it is reliably successful, but only after 5-7 seconds, as above.

Now, ideally, I could open a file, issue goto definition and have that succeed almost immediately. However, needing to do it twice (or save the file first) is an acceptable work-around to me personally. The much more important thing here, IMO, is minimizing overall latency. Moreover, I also understand that needing to do these key presses twice might not be a problem with the server, but rather the client. So it's less clear whether it's actually a bug in rust-analyzer or not.

Hopefully this is enough info to go on. These bug reports feel like they are super hard to work through. :-) My hope is that the latency question isn't specific to my setup, and it can be reproduced in other environments.

BurntSushi avatar Aug 04 '19 13:08 BurntSushi

I'm no longer using ALE, but I tested your scenario in VS Code and LanguageClient-neovim.

The Code extension runs cargo check, so I left it a couple of seconds to finish after opening the project. I then opened prog.rs and quickly Alt-clicked on the InstPtr at line 24. rust-analyzer is lazy and only processes the code when needed, so the first request is slower (~7s). The second go to definition request on the same symbol completes instantly, and so does one on Inst::Match at line 121. If I restart Code, open the file and wait for it to finish the analysis, even the first request completes instantly.

With LanguageClient-neovim, the first request takes about 8 seconds to complete. Following go to definition requests in the same file are either instant or take 2-3 seconds to finish. Waiting after opening the file doesn't help. There is no need to save the file for it to be processed.

If I do save the file, it gets processed in full. Go to definition has the same delay of around 8 seconds. If I save the file a second time with no changes, go to definition works instantly; same after inserting an empty line.


I think the Code extension requests some eager analysis for the file (maybe because of the highlighting) that the VIM extension does not. Besides that they behave the same: the first request can be slow, and later ones finish instantly if the file was fully processed.

Maybe the behaviour you're seeing is caused by ALE?

lnicola avatar Aug 04 '19 14:08 lnicola

Right, the first request is slow, but subsequent requests after that are fast. That's what I meant by the latency of using goto definition after opening a file.

I don't have enough knowledge to know whether this is being caused by ALE or not, but RLS has very little latency even when used with ALE.

BurntSushi avatar Aug 04 '19 14:08 BurntSushi

So you're only seeing this issue on the first go to definition request in each file, until closing the editor? I think that's normal. RLS tends to run eagerly, while rust-analyzer is lazy. I see ~~two~~ three ways to improve on this:

  • save the analysis information and re-open it in the next session
  • when opening a file, run the analysis only for that file, similarly to what the Code extension does; ~~I'm not sure if this can be implemented in RA in a general way~~ see the DidOpenTextDocument notification
  • after start-up, run the analysis over the project (i.e. what ra_cli analysis-stats does), with a lower priority

lnicola avatar Aug 04 '19 14:08 lnicola

IDEs like Eclipse or IntelliJ IDEA tend to perform indexing of the entire project on open, mostly in the background. After indexing is finished, everything works instantly.

eupn avatar Aug 04 '19 14:08 eupn

IDEs like Eclipse or IntelliJ IDEA tend to perform indexing of the entire project on open, mostly in the background. After indexing is finished, everything works instantly.

I think this is intentional, to make it work better on very large projects (think of the compiler). Saving the analysis results to a file would be great, bringing the best of both worlds. We could load them, then check for modified files in the background. But I'm not offering to implement this.

lnicola avatar Aug 04 '19 14:08 lnicola

So you're only seeing this issue on the first go to definition request in each file, until closing the editor?

Hmm, I don't think it's a per-file thing. Once the first goto definition completes, subsequent goto definition requests complete almost instantly, as long as they are within the same crate. (If I move to a different crate in the same Cargo workspace, then I see the long latency again. And again, after the first request completes, subsequent goto definition requests for that crate complete almost instantly.)

I think that's normal. RLS tends to run eagerly, while rust-analyzer is lazy.

To be clear, I think this is a bug regardless of whether it's expected behavior or not. I don't know whether RLS is saving an index to disk or not, but even though RLS generally takes longer to settle down after starting for a project, it still services goto definition requests nearly instantly.

BurntSushi avatar Aug 04 '19 14:08 BurntSushi

Hmm, I don't think it's a per-file thing. Once the first goto definition completes, subsequent goto definition requests complete almost instantly, as long as they are within the same crate. (If I move to a different crate in the same Cargo workspace, then I see the long latency again. And again, after the first request completes, subsequent goto definition requests for that crate complete almost instantly.)

This is because ALE is configured to set the project root to the nearest Cargo.toml, which is not correct for a workspace setup. I was having the same problem with LanguageClient-neovim; changing the root finder to look for Cargo.lock resolved it for me. With that config, only the first request for the whole project will be slow, which is acceptable I think.

Anyway, storing the index to disk should be a feature of rust-analyzer; what do you think, @matklad?

unrealhoang avatar Aug 05 '19 05:08 unrealhoang

Thanks for the report!

I have a hypothesis for why RLS is fast and rust-analyzer is slow in this case.

RLS has a fast-path for when save-analysis is not ready. Specifically, it uses racer to serve goto definition in this case, which gives fast, but approximate results.

On the other hand, rust-analyzer always uses precise (but, of course, quite incomplete at the moment) analysis, and it needs some time for initial analysis.

At the moment, I fear that a several-second delay for the first action after opening a project is a deliberate trade-off. That is, the imagined workflow is that you open a project in your editor of choice, spend about 10 seconds without smart IDE features, but, after that, everything is instant until you stop working on the project. From your issue description it seems that you hit this "initial loading" path quite a bit more often than I'd expect. @unrealhoang's explanation seems plausible: if the client creates a fresh server instance for each of the n packages in the workspace, you'll see the slowdown n times (and the memory usage will be n times larger as well). Another explanation could be that, when you work in vim, you close and open the editor for different files, and that doesn't allow the analyzer process to persist between sessions. These two problems could be addressed I think, but I'll need confirmation that they are indeed the real culprit here.

Long term, I have a couple of ideas for how to make initial processing faster.

First, at the moment rust-analyzer deliberately does not persist any analysis results to disk, and does a from-scratch analysis on start-up. This is done in order to avoid the complexity of I/O and state reconciliation. It also pushes us to make initial analysis acceptably fast :) Long term, we should implement persistence, either by adding on-disk storage to salsa, or by adding .rmeta files as an alternative to from-source analysis.

Second, the core of the problem here is that Rust's name-resolution rules are hard. Naively, one would expect that, in prog.rs's case, all the IDE needs to do is parse this single file and figure out that the definition is a couple of lines above. However, in the worst case, because of macros and glob imports, name resolution works at crate granularity, and requires a fixed-point iteration algorithm. Currently, rust-analyzer implements only the worst-case algorithm, so during that 5-7 second delay, rust-analyzer processes each module of regex, core, std, liballoc, etc. It seems like it should be possible to implement some kind of fast path (if the module doesn't have glob imports and macros, don't process the whole crate), but it's unclear how to do that correctly and without duplicating the logic.
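
To make the fixed-point idea concrete, here is a minimal, self-contained sketch (the module and scope types are invented for illustration; this is not rust-analyzer's actual code) of why glob imports force repeated passes over a crate's modules until nothing new becomes visible:

use std::collections::{BTreeMap, BTreeSet};

// Invented, heavily simplified model of crate-level name resolution with glob
// imports. A name made visible in one module by `use other::*` can in turn be
// re-exported into a third module, so a single pass is not enough: we iterate
// until a fixed point, i.e. until no module's scope grows any more.
type Module = &'static str;
type Scope = BTreeSet<&'static str>;

fn resolve(
    own_items: &BTreeMap<Module, Scope>,
    glob_imports: &BTreeMap<Module, Vec<Module>>,
) -> BTreeMap<Module, Scope> {
    let mut scopes = own_items.clone();
    loop {
        let mut changed = false;
        for (&module, sources) in glob_imports {
            for &source in sources {
                let imported = scopes.get(source).cloned().unwrap_or_default();
                let scope = scopes.entry(module).or_default();
                for item in imported {
                    changed |= scope.insert(item);
                }
            }
        }
        if !changed {
            return scopes; // fixed point reached: no new names became visible
        }
    }
}

fn main() {
    // `c` defines `Foo`; `b` glob-imports from `c`; `a` glob-imports from `b`.
    // `Foo` only reaches `a` on the second pass, which is why one sweep over
    // the modules is not sufficient in general.
    let own_items = BTreeMap::from([
        ("a", Scope::new()),
        ("b", Scope::new()),
        ("c", Scope::from(["Foo"])),
    ]);
    let glob_imports = BTreeMap::from([("a", vec!["b"]), ("b", vec!["c"])]);
    println!("{:?}", resolve(&own_items, &glob_imports));
}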

Third, we can do something like RLS and implement an explicit dumb mode, which serves requests until the full analysis is done, but might give you incorrect results.
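
For illustration only, such a dumb mode could be as crude as a textual scan of the current file while the precise analysis is still warming up; dumb_goto_definition below is a made-up helper, not an RLS or rust-analyzer API:

// Made-up sketch of an "explicit dumb mode": answer goto definition with a
// cheap textual scan of the current file, accepting wrong answers (imports,
// macros and shadowing are ignored) in exchange for an instant response.
fn dumb_goto_definition(source: &str, symbol: &str) -> Option<usize> {
    let keywords = ["struct ", "enum ", "fn ", "type ", "trait ", "const ", "static "];
    for (idx, line) in source.lines().enumerate() {
        let line = line.trim_start();
        let line = line.strip_prefix("pub ").unwrap_or(line);
        for kw in keywords {
            if let Some(rest) = line.strip_prefix(kw) {
                if let Some(after) = rest.strip_prefix(symbol) {
                    let followed_by_ident = after
                        .chars()
                        .next()
                        .map_or(false, |c| c.is_alphanumeric() || c == '_');
                    if !followed_by_ident {
                        return Some(idx + 1); // 1-based line of the likely definition
                    }
                }
            }
        }
    }
    None
}

fn main() {
    let src = "pub type InstPtr = usize;\n\npub struct InstSplit { pub goto1: InstPtr }\n";
    assert_eq!(dumb_goto_definition(src, "InstPtr"), Some(1));
    assert_eq!(dumb_goto_definition(src, "InstSplit"), Some(3));
}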

So, yeah, the TL;DR is that it's a won't-fix in the short term, but I am curious about the specifics of Vim here, because it seems like it hits from-scratch analysis more often than it should.

Also, one short-term fix we can add here is to kick off analysis as soon as the user opens a file, as opposed to waiting until they actually invoke goto definition.
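
A rough sketch of what that could look like on the server side, assuming hypothetical names (AnalysisHost, prime and on_did_open are invented, not rust-analyzer's real API): when the textDocument/didOpen notification arrives, start the per-file work on a background thread so the first goto definition already finds warm caches.

use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

// Hypothetical server-side types; rust-analyzer's real API differs.
#[derive(Default)]
struct AnalysisHost {
    // file path -> pretend "analysis result" (here just the number of lines)
    cache: Mutex<HashMap<String, usize>>,
}

impl AnalysisHost {
    fn prime(&self, path: &str, text: &str) {
        // Stand-in for the expensive work (parsing, name resolution, ...).
        let result = text.lines().count();
        self.cache.lock().unwrap().insert(path.to_owned(), result);
    }
}

// Would be called by the LSP notification handler for `textDocument/didOpen`.
fn on_did_open(host: Arc<AnalysisHost>, path: String, text: String) {
    thread::spawn(move || host.prime(&path, &text));
}

fn main() {
    let host = Arc::new(AnalysisHost::default());
    on_did_open(host.clone(), "src/prog.rs".into(), "pub type InstPtr = usize;\n".into());
    thread::sleep(Duration::from_millis(50)); // give the background work a moment
    println!("{:?}", host.cache.lock().unwrap().get("src/prog.rs"));
}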

matklad avatar Aug 05 '19 08:08 matklad

@matklad Thanks for the explanation! I appreciate it.

but I am curious about the specifics of Vim here, because it seems like it hits from-scratch analysis more often than it should.

Everyone's workflow is likely to be different, but I do generally keep one vim instance open for each repository I work on. That vim instance typically stays open for quite some time. So paying an initial cost there and then having effectively instant results isn't too much of a burden.

The problem is that I spend a lot of time reading code. Checking out a repository, opening some files and reading and understanding code in that repository is a fairly common thing for me. Each time I clone a repository, I'll open some files in vim. When I open those files, I do it because I want to try to read and understand some portion of code. I'll inevitably, at some point, utilize goto definition to find the definition of some type, but become frustrated when it doesn't work.

So I guess teasing this apart, there are two issues here:

  1. The latency of using goto definition itself, i.e., the time it takes from the first goto definition request until it succeeds.
  2. The fact that, as far as I can tell, the first goto definition request always fails if that goto definition request was the first interaction with LSP that causes the server to start. So to go back to my code reading example, I might be looking at some code for a few minutes before I even try to use goto definition. But since I'm just reading code, and haven't saved a file, as far as I can tell, rust-analyzer hasn't started yet. So the first goto definition request I send merely starts the server. But that request never succeeds, so I have to issue a second goto definition request after some amount of time has passed. (How do I, as the user, know when I have waited a sufficient amount of time? I don't. Which just adds to the frustration. The latency of that first request makes it worse, because from my perspective, nothing is happening. So I wind up spamming goto definition requests until they work, which also has the ill side effect of adding a bunch of entries to my tag stack.)

To be clear, as I said before, the extent to which these are problems with the client vs the server is not clear to me, since I'm not familiar with LSP internals. I'm just trying to describe the problem I'm seeing as an end user. :-)

Also, one short-term fix we can add here is to kick off analysis as soon as the user opens a file, as opposed to waiting until they actually invoke goto definition.

This would probably help in some non-trivial number of cases, yes.

The concerns about I/O synchronization are definitely appreciated. That's a huge pain to get right. And having a second fast path is also annoying. However, in my experience, these little UX bugaboos are important to have work well. I don't know whether my workflow is representative, but I wouldn't be surprised if it was, at least for vim users.

BurntSushi avatar Aug 05 '19 10:08 BurntSushi

Yeah, point 2 definitely seems like something that shouldn't be happening. Could you share a minimal vim config with the plugin you are using? I'd love to dig into this, but it's hard for me to reproduce it myself, as I am not a vim user :)

I am also curious if @unrealhoang's suggestion helps with this problem:

If I move to a different crate in the same Cargo workspace, then I see the long latency again

If that's the case, we might want to adjust our docs for vim setup.

matklad avatar Aug 05 '19 11:08 matklad

The issue of the first failing request doesn't happen in Code or LanguageClient-neovim, so my guess is that it's a client problem.

lnicola avatar Aug 05 '19 11:08 lnicola

@lnicola even if it's a client problem, I am still interested in debugging and fixing it :-)

matklad avatar Aug 05 '19 11:08 matklad

I'll try to work on getting a confirmed minimal vim config for you when I get home today. However, I can just post the relevant portions of my vim config now:

call plug#begin('~/.vim/plugged')

Plug 'w0rp/ale'
let g:ale_linters = {'rust': ['cargo', 'rls']}
let g:ale_rust_rls_executable = 'ra_lsp_server'
let g:ale_rust_rls_toolchain = 'stable'
let g:ale_rust_rls_config = {
      \ 'rust': { 'clippy_preference': 'off' }
      \ }
let g:ale_lint_on_enter = 0
let g:ale_lint_on_filetype_changed = 1
let g:ale_lint_on_save = 1
let g:ale_lint_on_text_changed = 'never'
let g:ale_lint_on_insert_leave = 0
let g:ale_completion_enabled = 0

call plug#end()

The relevant goto definition command is :ALEGoToDefinition.

Note that the above config assumes that you have vim-plug installed. Once the above is in your vim config, then run :PlugInstall. (You can keep a clean vim install afterwards by removing the ALE config above, and then running :PlugClean, which should delete it.)

Also note that I am not particularly attached to any particular LSP client. I tried several of them (including LanguageClient) but settled on ALE for reasons that I can't remember. When I get home, I'll try out some of the other vim LSP clients and see if they have the same problem.

BurntSushi avatar Aug 05 '19 12:08 BurntSushi

@BurntSushi from your config, I can see that ALE does not start the language server when you open a file, because of let g:ale_lint_on_enter = 0. Also, I just found in ALE's source code (https://github.com/dense-analysis/ale/blob/135de34d22/ale_linters/rust/rls.vim) that it indeed uses Cargo.toml for the project root, which does not work as intended for Cargo workspace projects. Unfortunately, that is not configurable in ALE. You can either patch it (to Cargo.lock) or use a different language client. A minimal config for LanguageClient to support go to definition:

call plug#begin('~/.vim/plugged')
Plug 'autozimu/LanguageClient-neovim', {
    \ 'branch': 'next',
    \ 'do': 'bash install.sh',
    \ }
call plug#end()

let g:LanguageClient_serverCommands = {
    \ 'rust': ['rustup', 'run', 'stable', 'ra_lsp_server'],
    \ }
let g:LanguageClient_rootMarkers = {
    \ 'rust': ['Cargo.lock'],
    \ }
" You can map differently, of course.
nnoremap <silent> gd :call LanguageClient_textDocument_definition()<CR>

You can control the language server start-up yourself with:

let g:LanguageClient_autoStart = 0

And start the language server manually by calling

:LanguageClientStart

unrealhoang avatar Aug 05 '19 12:08 unrealhoang

Ah interesting, thanks for catching my config error. I must have disabled that at some point, probably because of RLS.

I'll also take a look at LanguageClient too. Thanks!

BurntSushi avatar Aug 05 '19 12:08 BurntSushi

I'm using vscode and goto definition is always slow (it takes a while after startup before it's enabled, but even after it's ready it takes ~5 seconds each time to follow a definition). Is that expected, or do I have some wrong configs?

kanekv avatar Nov 11 '19 06:11 kanekv

@kanekv the first requests will be slower because rust-analyzer is lazy and doesn't parse and analyze the whole project on startup. But the next ones should be quite fast (instant on a small project) if they touch the same files.

In Code there's a "Rust Analyzer: Status" command that shows how much time the last LSP requests took, and there's also the profiling support if you think you've found a bug.

lnicola avatar Nov 11 '19 06:11 lnicola

I've tried to go to the definition of the same method twice, one attempt immediately after the other:

  753 textDocument/codeAction             3651ms
  751 rust-analyzer/inlayHints            4781ms
  756 rust-analyzer/inlayHints            1933ms
  760 textDocument/hover                  40ms
  764 textDocument/hover                  0ms
* 745 textDocument/codeLens               0ms
  744 textDocument/codeAction             0ms
  748 textDocument/codeAction             0ms
  749 textDocument/codeAction             0ms
  739 rust-analyzer/inlayHints            2570ms

And 2nd time:

  832 textDocument/definition             2604ms
* 813 textDocument/codeAction             0ms
  818 textDocument/codeAction             0ms
  820 textDocument/definition             0ms
  823 textDocument/foldingRange           0ms
  824 textDocument/codeLens               1ms
  826 textDocument/codeLens               0ms
  829 textDocument/codeAction             3590ms
  828 textDocument/documentHighlight      3832ms
  822 rust-analyzer/inlayHints            4807ms

I'll try to profile if this is not expected behavior. P.S. I've noticed that it seems to only cache the last definition: if I don't move the mouse over other methods, it works instantly. If I hover one method, then another, and then come back to the first one, it shows Loading... again and takes ~5 seconds.

kanekv avatar Nov 11 '19 06:11 kanekv

That's certainly not what I'm seeing while editing rust-analyzer itself. Most definition and resolve requests are 0ms for me, with hints and decorations taking ~200ms. I even tried hitting F12 instead of using the mouse to prevent the symbol from getting looked up on hover.


  924 textDocument/codeAction             1ms
  926 rust-analyzer/inlayHints            1ms
  925 rust-analyzer/decorationsRequest    10ms
  927 textDocument/foldingRange           3ms
  928 textDocument/codeLens               4ms
  929 textDocument/codeLens               4ms
  930 textDocument/codeAction             1ms
  931 codeLens/resolve                    2ms
* 921 textDocument/codeAction             3ms
  922 textDocument/definition             0ms

It does get slower when cargo check is running, but I assume you would have noticed it. It's also noticeably slower on large projects like rust-lang/rust. FWIW, I'm on a mid/high-end Linux laptop, but with an SSD.

lnicola avatar Nov 11 '19 07:11 lnicola

@lnicola I do have cargo watch enabled (it seems to be a must on macOS, otherwise the LSP process eats 100% of the CPU). I have a top-of-the-line last year's model with an i9 CPU and 32 GB of RAM.

kanekv avatar Nov 11 '19 08:11 kanekv

I do have cargo watch enabled (it seems to be a must on macOS, otherwise the LSP process eats 100% of the CPU).

Ugh, try enabling rust-analyzer.useClientWatching if you don't already have it enabled. There are some issues with file watching on macOS, but I don't think the cargo watch integration has any effect here. Or maybe you're confusing that with rust-analyzer.enableCargoWatchOnStartup. The latter (which is the one I was referring to) just runs cargo check on startup and when you save a file. It does slow down RA while cargo check is running.

lnicola avatar Nov 11 '19 08:11 lnicola

I have both enabled. It seems my numbers are way off; any ideas on how to help fix it?

kanekv avatar Nov 11 '19 09:11 kanekv

The first step here might be to create a separate issue, because it looks like in this case we are observing a specific bug, rather than the general slowness due to how initialization currently works architecturally.

If this issue happens only in a specific project, it would be great to get a link to this project.

matklad avatar Nov 11 '19 09:11 matklad

Some basic steps I would try:

  1. test with a different project, maybe yours hits some corner case in RA; the RA repository should be a good pick
  2. wait for cargo check to finish, if it's the first time you're opening the project
  3. try the profiling support I linked above
  4. it's unlikely that this is platform-specific, but maybe try booting off a Linux live USB to see if it still happens there (can Macs even boot Linux?)

lnicola avatar Nov 11 '19 09:11 lnicola

@matklad @lnicola Looks like I've got more interesting info: when I go to definition using the hotkey (F12), it works pretty fast (less than a second). When I try to Cmd+click instead, it takes a long time and shows a Loading... tooltip before enabling the hyperlink and the ability to click.

kanekv avatar Nov 11 '19 10:11 kanekv

A Cmd+click will first activate hover before goto definition

kjeremy avatar Nov 11 '19 16:11 kjeremy

Yeah, is hover supposed to be slow? Should I open another issue about it?

kanekv avatar Nov 11 '19 21:11 kanekv

Please do.

lnicola avatar Nov 11 '19 21:11 lnicola

Ctrl-click activating a hover first - is that desired behavior? Or is it that the hover request is already in flight before someone has time to click? I must admit I find the hover kicks in a little bit too soon at the moment, and things start appearing when I'm not really intending for a hover to happen. Feels a little 'jumpy'. If the hover was a tad slower to kick off, you might get your ctrl-clicks in beforehand :-)

gilescope avatar Feb 17 '20 15:02 gilescope

@gilescope it is up to the language server client (the common case appears to be VS Code); there is probably a setting somewhere to change the timing.

kjeremy avatar Feb 17 '20 15:02 kjeremy