
Performance issues with large codebases

Open ahmedelgabri opened this issue 4 years ago • 46 comments

I'm facing massive performance issues with a project I'm working on that contains a lot of files. In this codebase completion is nearly unusable: it causes lots of Lua and out-of-memory errors and crashes the native LSP client.

One of the most common errors I get is this:

Error executing luv callback:
...ovim/HEAD-7ba28b1/share/nvim/runtime/lua/vim/lsp/rpc.lua:558: cannot resume dead coroutine
Press ENTER or type command to continue

To make sure this was not the native LSP itself, I did the following:

  • Removed all the plugins
  • Installed all of them including nvim-lspconfig but without completion-nvim
  • Tested with plain omni-completion (setlocal omnifunc=v:lua.vim.lsp.omnifunc) -> everything ran very smoothly & fast (a minimal sketch of this baseline setup follows the list)
  • Installed completion-nvim
  • Tested -> performance issues all over the place (editor is locked, completion takes a lot of time, etc…)
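
For reference, a minimal sketch of what such a baseline setup usually looks like; the on_attach name and structure here are illustrative (e.g. the callback handed to an nvim-lspconfig server setup), not the reporter's actual config:

-- Baseline test sketch: built-in LSP completion via omnifunc, no completion plugin.
-- `on_attach` stands for whatever callback is handed to the language server setup.
local function on_attach(client, bufnr)
  -- Route <C-x><C-o> through Neovim's built-in LSP omnifunc.
  vim.api.nvim_buf_set_option(bufnr, 'omnifunc', 'v:lua.vim.lsp.omnifunc')
end

return on_attach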

To check whether coc.nvim has the same issues, I tried that as well, and it worked fine too.

This is my completion.nvim config & this is my LSP config

Note, the exact same LSP config works fine without completion.nvim, which also means that the TypeScript LSP server is not the culprit either.

ahmedelgabri avatar Sep 11 '20 10:09 ahmedelgabri

That's sad to hear. In fact, I'm also facing the same issue with a large C++ file with clangd as the LSP server. I'm not quite sure why file size would cause this performance drag, but I'll look into it. Right now the temporary workaround is using :CompletionToggle to toggle completion off when facing a giant file (which shouldn't be often).

haorenW1025 avatar Sep 11 '20 15:09 haorenW1025

I found a bit of bad behavior.

For example, when typing console, a typical completion plugin sends one request, but completion-nvim sends 7 requests, one for each of c, o, n, s, o, l, e.

That behavior is correct, though, if the server returns isIncomplete=true in its first response.

hrsh7th avatar Sep 14 '20 16:09 hrsh7th

But correct behaviour doesn't always mean a good experience; maybe it should debounce and batch these requests instead of sending them letter by letter.
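
For illustration, a hedged sketch of that kind of debouncing in Lua (this is not completion-nvim's actual code; request_completion and the 150 ms interval are placeholders):

-- Debouncing sketch: restart a timer on every keystroke so the request only fires
-- once typing pauses. `request_completion` stands in for whatever function would
-- actually send the textDocument/completion request.
local function request_completion()
  print('would send textDocument/completion here')
end

local debounce_ms = 150 -- placeholder interval
local timer = vim.loop.new_timer()

-- Intended to be called from a TextChangedI handler.
local function on_text_changed_i()
  timer:stop()
  timer:start(debounce_ms, 0, vim.schedule_wrap(request_completion))
end

return on_text_changed_i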

ahmedelgabri avatar Sep 15 '20 09:09 ahmedelgabri

@ahmedelgabri Can you provide the project you mentioned? I've done a little test on my side, but my issue seems to be on the clangd side. Otherwise I would have to push a testing branch, and you might need to help me test it.

haorenW1025 avatar Sep 15 '20 11:09 haorenW1025

@ahmedelgabri Can you provide the project you mentioned? I've done a little test on my side, but my issue seems to be on the clangd side. Otherwise I would have to push a testing branch, and you might need to help me test it.

Unfortunately I can't do that. I'm fine with testing though

ahmedelgabri avatar Sep 15 '20 11:09 ahmedelgabri

Actually, @mg979 has been working on a fork of completion-nvim (link), in which the redundant timers were refactored out, and he solved some other issues as well. @ahmedelgabri, can you take some time to test whether it works for you? If so, we just need to start porting things back to completion-nvim; if not, we'll need to spend some more time figuring out what went wrong... Note that its settings are not compatible with completion-nvim's (which should make it easier to test them side by side), so you might have to spend some time reading the docs. A setting @mg979 suggests is:

let g:autocomplete = {'confirm_key': "\<C-Y>"}
let g:autocomplete.chains = {
       \ 'vim': ['snippet', 'keyn', 'file', 'cmd'],
       \ 'cmake': {'String': ['keyn'], 'default': ['file', 'omni', 'keyn']},
       \}

haorenW1025 avatar Sep 16 '20 13:09 haorenW1025

I have tried the fork. The good thing is that it at least lets me type and doesn't block my editor; also, snippets are shown right away.

But it's still very slow and sometimes doesn't show completion at all. Here are a couple of gifs showing how it works in a typescript.tsx file where I do have React imported at the top of the file (this is a real file from the repo I talked about, but I can't show details, sorry).

I was intentionally typing slowly to show how slow it is, but I usually type faster than that.

Here is the config that I used, which is also shown on the left side of each gif

This file was added to ~/.config/nvim/after/plugin/autocomplete.vim

autocmd BufEnter * call autocomplete#attach()

let g:autocomplete = get(g:, 'autocomplete', {})
let g:autocomplete.snippets = 'vim-vsnip'
let g:autocomplete.chains = {
      \ 'typescript' : [ 'path', ['lsp', 'snippet'], 'keyn' ],
      \ 'typescript.tsx' : [ 'path', ['lsp', 'snippet'], 'keyn' ],
      \}

imap            <Tab>   <Plug>(TabComplete)
inoremap <expr> <S-Tab> pumvisible() ? "\<C-p>" : "\<S-Tab>"

imap <localleader>j <Plug>(NextSource)
imap <localleader>k <Plug>(PrevSource)

set completeopt=menuone,noselect
set shortmess+=c

With the default g:autocomplete.chains:

Kapture 2020-09-16 at 16 42 55

With a custom g:autocomplete.chains for typescript

Kapture 2020-09-16 at 16 46 31

ahmedelgabri avatar Sep 16 '20 14:09 ahmedelgabri

@ahmedelgabri Thanks a lot for the report! I think not having the editor get stuck is good news. We'll start porting some key changes back to completion-nvim and see how it goes. I think we should aim to optimize completion speed once the lock-ups are gone.

Edit: Also, can you show a gif of coc.nvim in use? I'm quite curious about the difference between the two; thanks in advance!

haorenW1025 avatar Sep 16 '20 15:09 haorenW1025

I will keep it running until the end of the week to get a better understanding, and maybe report more issues if they come up.

ahmedelgabri avatar Sep 16 '20 15:09 ahmedelgabri

Thanks a lot. Also, some of the changes in the fork improve signature help and hover. Can you try using completion-nvim again with hover and signature help turned off and see if it still sucks? That way I'll have a better understanding of what went wrong. Sorry to keep bothering you :(

haorenW1025 avatar Sep 16 '20 15:09 haorenW1025

No worries, happy to help 😊

I was already testing completion-nvim again, because I think I found another issue (though this one is with my setup): vim-gitgutter. It slowed Vim down a lot! After removing it, completion-nvim became more responsive; it's still a bit slow and I still get the cannot resume dead coroutine errors, but it's much more usable.

So that was interesting… I will most probably get rid of vim-gitgutter since it has way less value for me than LSP & completion.

ahmedelgabri avatar Sep 16 '20 15:09 ahmedelgabri

Hi. The typescript-language-server does not support isIncomplete, to my knowledge.

But completion-nvim sends a request on each char.

I bisected to find the cause, and found that commit e4dddd8e29224c667972fc33a2537a2e7e1e1a4c introduced it.

@ahmedelgabri Could you test with 49a2335d2f9e2c15bf597cde555ecad3bdf70663 ?

hrsh7th avatar Sep 16 '20 15:09 hrsh7th

@ahmedelgabri Could you test with 49a2335 ?

I tested that, but there was no major difference.

BTW, I can confirm that disabling vim-gitgutter, or at least disabling its realtime updates, improved performance greatly; I have had it disabled for a couple of days already.

completion.nvim is still not as fast as coc.nvim, but it's much better than before.

ahmedelgabri avatar Sep 18 '20 09:09 ahmedelgabri

Ahh, okay. I think that's because vim-gitgutter also uses timers and async work (as does completion-nvim), so their work on the event loop piles up and slows Neovim down a lot. coc.nvim uses the remote API, so it stays usable in this case. Nevertheless, the refactoring should be done; I'll start working on it whenever I have time. @ahmedelgabri I'll keep you updated on the progress.

@hrsh7th About the isIncomplete flag, I think that's worth discussing in another issue. We shouldn't send a request on every character if the server doesn't support isIncomplete.
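
As a rough illustration of that guard (illustrative names only, not completion-nvim's API): re-query the server only while the previous CompletionList was marked isIncomplete, otherwise keep reusing the cached items.

-- Sketch of honoring CompletionList.isIncomplete from the LSP spec.
local cached = { items = nil, is_incomplete = true }

-- `result` may be a bare CompletionItem[] or a CompletionList table.
local function on_completion_response(result)
  cached.items = result.items or result
  cached.is_incomplete = result.isIncomplete == true
end

-- Decide whether typing another character should hit the server again.
local function should_send_request()
  return cached.items == nil or cached.is_incomplete
end

return { on_completion_response = on_completion_response,
         should_send_request = should_send_request }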

haorenW1025 avatar Sep 18 '20 12:09 haorenW1025

@ahmedelgabri,

Can you temporarily switch to the LSC plugin? It is a VimScript LSP plugin. Note, it works fine in Neovim, it is not restricted to Vim 8.

This article of mine may help you get setup: https://bluz71.github.io/2019/10/16/lsp-in-vim-with-the-lsc-plugin.html

LSC apparently uses a 500ms debounce along with no support for isIncomplete.

I am keen to know if there is a difference in performance between Neovim LSP + completion.nvim and LSC. Maybe there is, maybe there isn't, maybe LSC will be worse. But it will be a data point that could help us understand this issue.

I have opened #231 with regard to debouncing, but that is purely speculative.

Best regards.

bluz71 avatar Oct 03 '20 11:10 bluz71

@bluz71 I couldn't get completion to work at all. I used your config and tried setting g:lsc_auto_map to v:true and to {'defaults': v:true, 'Completion': 'omnifunc'}; nothing worked. Checking with set omnifunc? it's always empty, and I don't change it anywhere other than in the Neovim LSP config, which I disabled to test LSC.

But I see that you are using gitgutter. I was using it too, and after removing it literally everything in my Vim improved: typing became more responsive and completion performance improved too. I'd say start there first.

ahmedelgabri avatar Oct 03 '20 13:10 ahmedelgabri

Author of vim-lsp and asyncomplete.vim here.

Debouncing is not enough. There are other, smarter ways to fix the performance, which asyncomplete.vim and ncm use and which are similar to what VSCode does. I documented my findings on how VSCode stays performant at https://github.com/roxma/nvim-completion-manager/issues/30#issuecomment-283281158. You could apply the same trick to completion-nvim; you might want to read the thread from the start to get the full context.
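
The rough idea, as a hedged sketch (illustrative names; not asyncomplete.vim's, ncm's, or VSCode's actual code): fetch the candidate list once, then filter and re-sort the cached items client-side as the user keeps typing, instead of hitting the server on every character.

-- Client-side refinement of a cached completion result.
local function filter_cached(cached_items, base)
  -- `base` is the word typed so far; a real implementation would likely use fuzzy
  -- matching and smarter re-scoring rather than a plain prefix match.
  local matches = vim.tbl_filter(function(item)
    return vim.startswith(item.label:lower(), base:lower())
  end, cached_items)
  table.sort(matches, function(a, b)
    return (a.sortText or a.label) < (b.sortText or b.label)
  end)
  return matches
end

return filter_cached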

prabirshrestha avatar Oct 03 '20 20:10 prabirshrestha

@bluz71 I couldn't get completion to work at all. I used your config and tried setting g:lsc_auto_map to v:true and to {'defaults': v:true, 'Completion': 'omnifunc'}; nothing worked. Checking with set omnifunc? it's always empty, and I don't change it anywhere other than in the Neovim LSP config, which I disabled to test LSC.

That is likely an issue with the server command not working or not being correct. Which programming language are you using, and which language server?

If you have time I am very interested in helping you get LSC working. Note, that is not because I want you to switch LSP clients to LSC, but rather I want the data point of how LSC performance compares with completion.nvim for you. Maybe we can improve the performance of this plugin?

Can you post your LSC config here, thanks. My first instinct is that you may need to fully qualify the language server binary and possibly add a --lsp or --stdio type flag. I believe it should be an easy fix.

In my case I have Dart/JavaScript and Ruby LSP setups for both LSC and Neovim LSP + completion, here and here.

But I see that you are using gitgutter I was using this too & after removing it literally everything in my vim improved, typing became more responsive & completion perf improved too. I'd say start there first.

My assertion in #231 is that I notice completion.nvim uses more CPU than LSC. Hence, in your case, removing gitgutter helps, but it may still be the case that completion.nvim (or the language server it communicates with) is using more CPU than it ideally should (when compared with LSC). A theory that is yet to be confirmed.

As for gitgutter, I notice no issue unless I am dealing with files 200K and greater. I also configure gitgutter to use Ripgrep:

let g:gitgutter_grep = 'rg'

function! signs#Disable() abort
    :GitGutterBufferDisable
    :ALEDisableBuffer
endfunction

    autocmd BufReadPre *
      \ if getfsize(expand('%')) > 200000|
      \     call signs#Disable()|
      \ endif

Anyway, your code base and test case should be really helpful to ascertain if there are inefficiencies in completion.nvim.

Best regards.

bluz71 avatar Oct 04 '20 01:10 bluz71

Author of vim-lsp and asyncomplete.vim here.

Debouncing is not enough. There are other, smarter ways to fix the performance, which asyncomplete.vim and ncm use and which are similar to what VSCode does. I documented my findings on how VSCode stays performant at roxma/nvim-completion-manager#30 (comment). You could apply the same trick to completion-nvim; you might want to read the thread from the start to get the full context.

Hello again @prabirshrestha.

If completion.nvim is sending a request to the server for every character the user types (which it may not be doing, I am just speculating), then a debounce of 500ms surely could be of some help (LSC uses a 500ms debounce and is none the worse for it). It may not be the silver bullet.

I do agree that the tips you provide in the linked post are also highly worthwhile, maybe even more so than debouncing.

I will copy your post over to #231 since that is the debounce issue.

This one, which may strongly relate, should focus on @ahmedelgabri's original performance issue. If we can get LSC working for him, that would provide an interesting comparison. What will be the result? I have learnt that with LSP anything is possible: LSC could be faster, slower, or the same.

Cheers.

bluz71 avatar Oct 04 '20 02:10 bluz71

That is likely an issue with the server command not working or not being correct. Which programming language are you using, and which language server? […] Can you post your LSC config here, thanks. My first instinct is that you may need to fully qualify the language server binary and possibly add a --lsp or --stdio type flag. I believe it should be an easy fix. […] In my case I have Dart/JavaScript and Ruby LSP setups for both LSC and Neovim LSP + completion, here and here.

That's highly unlikely, because the same server works fine when I switch back to the Neovim LSP, and I had already added --stdio. I was testing with the TypeScript language server, which is invoked as typescript-language-server --stdio; that's what I had in the LSC config, and it's already in my $PATH.

This was the setup I tested LSC with.

let g:lsc_enable_autocomplete = v:true
let g:lsc_auto_map = v:true
let g:lsc_server_commands = {'typescript': 'typescript-language-server --stdio', 'typescript.tsx': 'typescript-language-server --stdio'}

If you have time I am very interested in helping you get LSC working. Note, that is not because I want you to switch LSP clients to LSC, but rather I want the data point of how LSC performance compares with completion.nvim for you. Maybe we can improve the performance of this plugin?

I understand that 🙂

But I see that you are using gitgutter. I was using it too, and after removing it literally everything in my Vim improved: typing became more responsive and completion performance improved too. I'd say start there first.

My assertion in #231 is that I notice completion.nvim uses more CPU than LSC. Hence, in your case, removing gitgutter helps, but it may still be the case that completion.nvim (or the language server it communicates with) is using more CPU than it ideally should (when compared with LSC). A theory that is yet to be confirmed.

As for gitgutter, I notice no issue unless I am dealing with files 200K and greater. I also configure gitgutter to use Ripgrep:

let g:gitgutter_grep = 'rg'

function! signs#Disable() abort
    :GitGutterBufferDisable
    :ALEDisableBuffer
endfunction

    autocmd BufReadPre *
      \ if getfsize(expand('%')) > 200000|
      \     call signs#Disable()|
      \ endif

Thanks, I will check that, but in my case it was not about file size: the files I tested on in that repo were not that large, yet it choked really badly. I also had rg as the gitgutter grep, but that tip about disabling it based on file size is a nice one; I will use it for other stuff anyway, thanks for the tip.

ahmedelgabri avatar Oct 04 '20 09:10 ahmedelgabri

Try this instead:

let g:lsc_server_commands = {
 \ 'typescript': { 'command': 'typescript-language-server --stdio' }, 
 \ 'typescript.tsx': { 'command': 'typescript-language-server --stdio' }
 \ }

The space between typescript-language-server and --stdio may require the use of the command directive. I always use command as noted here.

I myself use typescript-language-server with JavaScript and TypeScript code.

Also, LSC's automap, from my reading, does not set up omnifunc. I recommend defining your own mappings; these are mine:

let g:lsc_auto_map = {
 \  'GoToDefinition': 'gd',
 \  'FindReferences': 'gr',
 \  'Rename': 'gR',
 \  'ShowHover': 'K',
 \  'FindCodeActions': 'ga',
 \  'Completion': 'omnifunc',
 \}

Best regards.

bluz71 avatar Oct 04 '20 10:10 bluz71

Ok, I have tested LSC again & it's indeed slightly more responsive than completion-nvim, I can keep typing without any locks or hangs.

I understand that the completion list is affected by the LSP server itself & in this case, I'm not sure anything can be done here because the project is huge & typescript-language-server is slow.

This is the configuration I used, thanks @bluz71 for the help.

let g:lsc_server_commands = {
 \ 'javascript': { 'command': 'typescript-language-server --stdio' },
 \ 'javascript.jsx': { 'command': 'typescript-language-server --stdio' },
 \ 'typescript': { 'command': 'typescript-language-server --stdio' },
 \ 'typescript.tsx': { 'command': 'typescript-language-server --stdio' }
 \ }
let g:lsc_auto_map = {
 \  'GoToDefinition': 'gd',
 \  'FindReferences': 'gr',
 \  'Rename': 'gR',
 \  'ShowHover': 'K',
 \  'FindCodeActions': 'ga',
 \  'Completion': 'omnifunc',
 \}

ahmedelgabri avatar Oct 04 '20 10:10 ahmedelgabri

I also noticed that the cannot resume dead coroutine error in completion-nvim usually happens when I'm trying to complete JSX, but not normal JavaScript/TypeScript code:

React. // this rarely shows this error
<ComponentName // this usually shows the error

ahmedelgabri avatar Oct 04 '20 10:10 ahmedelgabri

LSC should be slower since it is VimScript, which is orders of magnitude slower than LuaJIT.

That it is more responsive means there are likely inefficiencies in completion.nvim.

Thanks for testing.

bluz71 avatar Oct 04 '20 10:10 bluz71

Ok, I think I found another potential performance-killing item in upstream Neovim LSP when compared with LSC.

Whilst in insert mode, as changes are being made, LSC sends minimal byte range style didChange events to the language server, for example:

{
  "method": "textDocument/didChange",
  "jsonrpc": "2.0",
  "params": {
    "contentChanges": [
      {
        "range": {
          "end": { "character": 2, "line": 8 },
          "start": { "character": 2, "line": 8 }
        },
        "text": "  @\n  ",
        "rangeLength": 0
      }
    ],
    "textDocument": { "uri": "file:///tmp/foo/foobar.rb", "version": 2 }
  }
}

Neovim LSP, from my testing, is always sending the complete file contents after every change, for example:

{
  "method": "textDocument/didChange",
  "jsonrpc": "2.0",
  "params": {
    "contentChanges": [
      {
        "text": "class FooBar\n  def initialize\n    @abc = \"abc\"\n    @hello = \"hello\"\n    @help = \"help\"\n  end\n\n  def baz\n    @\n  end\nend\n\nfoo = Foobar.new\n"
      }
    ],
    "textDocument": { "uri": "file:///tmp/foo/foobar.rb", "version": 6 }
  }
}

How will the language server know what has changed from Neovim LSP's didChange? Does it have to compute the difference server-side?

I just tested a 3,000 line JavaScript file and Neovim LSP is sending the full 3,000 lines after every keypress in insert mode (or maybe until CursorHold?).

This seems highly inefficient to me.

LSC, on the other hand, sends very small range diffs. Here is an example of the packets sent whilst I was appending [1, 2, 3] to a huge JavaScript file:

{
  "method": "textDocument/didChange",
  "jsonrpc": "2.0",
  "params": {
    "contentChanges": [
      {
        "range": {
          "end": { "character": 6, "line": 419 },
          "start": { "character": 6, "line": 419 }
        },
        "text": "\n      ",
        "rangeLength": 0
      }
    ],
    "textDocument": {
      "uri": "file:///home/dennis/projects/platters_app/src/components/react.development.js",
      "version": 2
    }
  }
}

{
  "method": "textDocument/didChange",
  "jsonrpc": "2.0",
  "params": {
    "contentChanges": [
      {
        "range": {
          "end": { "character": 6, "line": 419 },
          "start": { "character": 6, "line": 419 }
        },
        "text": "[]",
        "rangeLength": 0
      }
    ],
    "textDocument": {
      "uri": "file:///home/dennis/projects/platters_app/src/components/react.development.js",
      "version": 3
    }
  }
}

{
  "method": "textDocument/didChange",
  "jsonrpc": "2.0",
  "params": {
    "contentChanges": [
      {
        "range": {
          "end": { "character": 7, "line": 419 },
          "start": { "character": 7, "line": 419 }
        },
        "text": "1, 2, 3",
        "rangeLength": 0
      }
    ],
    "textDocument": {
      "uri": "file:///home/dennis/projects/platters_app/src/components/react.development.js",
      "version": 4
    }
  }
}

I am not an LSP expert, but Neovim LSP sending the complete file buffer for every insertion change via didChange seems like it could be very bad for language server performance.
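
For context, and as a hedged aside: in the LSP spec the sync granularity is negotiated at initialize time via TextDocumentSyncKind (0 = None, 1 = Full, 2 = Incremental), so a client that supports incremental sync can send range-based contentChanges like the LSC payloads above. Expressed as a Lua table, the relevant server capability fragment looks roughly like this (the shape from the spec, not Neovim's internal representation):

-- A server advertising change = 2 accepts incremental, range-based didChange payloads;
-- change = 1 means it expects the full document text on every change.
local server_capabilities = {
  textDocumentSync = {
    openClose = true,
    change = 2, -- TextDocumentSyncKind.Incremental
  },
}

return server_capabilities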

bluz71 avatar Oct 05 '20 01:10 bluz71

That seems to be a problem with Neovim's LSP implementation. You could open an issue there.

Shougo avatar Oct 05 '20 02:10 Shougo

I thought Neovim's built-in LSP supported incremental sync, but that seems to have been my misunderstanding.

https://github.com/neovim/neovim/blob/master/runtime/lua/vim/lsp.lua#L813

hrsh7th avatar Oct 05 '20 02:10 hrsh7th

@hrsh7th,

I logged Ruby and JavaScript language servers with both small and huge files; Neovim always sends the full buffer payload.

And it happens pretty frequently whilst a user is typing new content. If one types slowly enough, it happens after every key press.

The original poster of this issue is dealing with a big repo and maybe even big files, so the current didChange behaviour could well be a factor.

Note, I think other strategies such as debouncing are still worth exploring in completion.nvim.

Best regards.

bluz71 avatar Oct 05 '20 02:10 bluz71

Very sorry for the late reply (again). @bluz71 If debouncing means a timer interval for checking changed events, completion-nvim already has that; it's in fact a configurable variable, g:completion_timer_cycle, whose default value is 80 ms, which is much lower than LSC's. The isIncomplete issue should be fixed, though, and I'll be working on it.
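
For anyone who wants to experiment, raising that cycle is a one-liner; the 200 here is an arbitrary example value, not a recommendation:

-- Raise completion-nvim's polling cycle (default mentioned above: 80 ms).
vim.g.completion_timer_cycle = 200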

@prabirshrestha Thanks a lot for the information. completion-nvim is highly inspired by asyncomplete.vim, so thanks for your work on that:)

haorenW1025 avatar Oct 15 '20 12:10 haorenW1025

@hrsh7th May I ask how you found out that the server does not support isIncomplete? sumneko_lua just keeps sending isIncomplete=true, and it's quite annoying.

haorenW1025 avatar Oct 15 '20 13:10 haorenW1025