
Continue dev plugin: autocompletion is making VS Code slow

Open AIxHunter opened this issue 8 months ago • 10 comments

Before submitting your bug report

Relevant environment info

- OS: 20.04
- Continue version: v1.0.5
- IDE version: 1.99.0
- Model: Codestral
- config:

```yaml
name: Local Assistant
version: 1.0.0
schema: v1
models:
  - name: Gemini 2.0 Flash
    provider: gemini
    model: gemini-2.0-flash
    apiKey: xxx...
  - name: Codestral
    provider: mistral
    model: codestral-latest
    apiKey: yyy...
    roles:
      - autocomplete
    defaultCompletionOptions:
      temperature: 0.3
      stop:
        - "\n"

rules:
  - Give concise responses
  - Always assume TypeScript rather than JavaScript

context:
  - provider: code
  - provider: docs
  - provider: diff
  - provider: terminal
  - provider: problems
  - provider: folder
  - provider: codebase
```

Description

VS Code becomes very laggy when the Continue dev plugin is enabled and I'm writing to a file, and returns to normal when I disable the plugin, so the problem appears to be the autocomplete.

For example, while editing a file, saving the document (Ctrl+S) only works after multiple attempts, and the same goes for copying and pasting (Ctrl+C, Ctrl+V). The inline suggestions and Continue keep loading (see the picture below):

[Screenshot: inline suggestions and the Continue indicator stuck in a loading state]

To reproduce

No response

Log output


AIxHunter avatar Apr 08 '25 16:04 AIxHunter

Continue is cool, but it's such a painful experience :(

exxocism avatar Apr 09 '25 03:04 exxocism

Same issue on my side. I use qwen2.5-coder:1.5b with ollama for autocomplete though.

d4rkc0de avatar Apr 09 '25 13:04 d4rkc0de

I set "editor.inlineSuggest.enabled": false and bind Command+H to editor.action.inlineSuggest.trigger. This way I can use autocompletion whenever I want.
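A sketch of that setup in VS Code's standard settings.json and keybindings.json; the key choice (Command+H) is from this comment, and the "when" clause is an assumption to keep the binding scoped to the editor:

```jsonc
// settings.json — turn off automatic inline suggestions
{
  "editor.inlineSuggest.enabled": false
}
```

```jsonc
// keybindings.json — request an inline suggestion on demand
[
  {
    "key": "cmd+h",
    "command": "editor.action.inlineSuggest.trigger",
    "when": "editorTextFocus"  // assumption: only fire while typing in an editor
  }
]
```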

ShaySheng avatar Apr 09 '25 18:04 ShaySheng

@ShaySheng Thanks, that partially helps, but I still prefer the default autocomplete without any key binding; it helps accelerate my coding and writing.

AIxHunter avatar Apr 09 '25 20:04 AIxHunter

Very slow with Ollama.

AwaisBinKaleem avatar Aug 05 '25 22:08 AwaisBinKaleem

Triaging issues and linking some that could be similar.

Related Issues
#3616: The plugin responds slowly if you turn autocomplete on and off more than 10 times in a row
#5716: Continue AI Chat model + Autocomplete Running incredibly slow
#6570: When using Autocomplete VS Code function list is very slow

bdougie avatar Aug 08 '25 17:08 bdougie

I stopped using Continue about six months ago because of this lag issue. Yesterday I tried it again, but the problem still exists. Unfortunately, I have deactivated it again.

rkmax avatar Sep 04 '25 00:09 rkmax

Hey @rkmax, curious to know more about your setup. Was this with an Ollama model? We have fixes coming for Ollama performance improvements, but it would be good to know if this is something else.

sestinj avatar Sep 12 '25 19:09 sestinj

For me it sometimes autocompletes and other times it does not. Or I guess it's just taking way too long to show anything, because if I wait long enough it eventually seems to suggest something, but it's very slow.

Tested with

qwen-2.5-coder:7b and qwen2.5-coder:1.5b (ollama) and also gemini-flash-2.0

On a MacBook Pro M4 Max 48GB, and I'm only using the autocomplete option, with no models assigned to chat.

mike-pt avatar Oct 08 '25 09:10 mike-pt

I ran into what seems to be the same issue, but was able to improve the behavior by increasing the timeouts in the Continue settings (you have to scroll down on the main settings page):

- Autocomplete Timeout (ms): 350
- Autocomplete Debounce (ms): 500

While still not as smooth as e.g. Copilot autocomplete, I can now get consistent (and usable) suggestions. Model: qwen-coder:1.5b. Machine: MacBook Pro M4 Max.
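For readers who configure Continue via a config file instead of the settings UI: a hypothetical sketch of the same two knobs. The key names below (tabAutocompleteOptions, debounceDelay, and a timeout field) are assumptions based on older config.json-style configuration and may not match the current schema — verify against the Continue docs before copying. The values are the ones from this comment.

```jsonc
// Hypothetical config fragment — key names are assumptions, values from above
{
  "tabAutocompleteOptions": {
    "debounceDelay": 500,  // ms of idle typing before a completion request
    "timeout": 350         // ms to wait for the model before giving up
  }
}
```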

JoschD avatar Nov 14 '25 10:11 JoschD