Continue dev plugin: autocompletion is making VS Code slow
Before submitting your bug report
- [x] I believe this is a bug. I'll try to join the Continue Discord for questions
- [ ] I'm not able to find an open issue that reports the same bug
- [ ] I've seen the troubleshooting guide on the Continue Docs
Relevant environment info
- OS: 20.04
- Continue version: v1.0.5
- IDE version: 1.99.0
- Model: Codestral
- config:
```yaml
name: Local Assistant
version: 1.0.0
schema: v1
models:
  - name: Gemini 2.0 Flash
    provider: gemini
    model: gemini-2.0-flash
    apiKey: xxx...
  - name: Codestral
    provider: mistral
    model: codestral-latest
    apiKey: yyy...
    roles:
      - autocomplete
    defaultCompletionOptions:
      temperature: 0.3
      stop:
        - "\n"
rules:
  - Give concise responses
  - Always assume TypeScript rather than JavaScript
context:
  - provider: code
  - provider: docs
  - provider: diff
  - provider: terminal
  - provider: problems
  - provider: folder
  - provider: codebase
```
Description
VS Code becomes very laggy when the Continue dev plugin is enabled and I'm writing to a file, and it goes back to normal when I disable the plugin, so the problem is with the autocomplete.
For example, while editing a file, saving the document (Ctrl + S) only works after multiple attempts, and the same goes for copying and pasting (Ctrl + C, Ctrl + V). The inline suggestions and Continue keep loading (see the picture below):
To reproduce
No response
Log output
Continue is cool, but it's such a painful experience :(
Same issue on my side. I use qwen2.5-coder:1.5b with ollama for autocomplete though.
I use "editor.inlineSuggest.enabled": false and bind Command + H to editor.action.inlineSuggest.trigger.
This way I can trigger autocompletion whenever I want.
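For anyone wanting to replicate that setup, a minimal sketch of the two pieces involved (assuming macOS and the standard user settings/keybindings files; the `when` clause is just the usual editor-focus guard and can be adjusted):

```jsonc
// settings.json: turn off automatic inline suggestions
{
  "editor.inlineSuggest.enabled": false
}
```

```jsonc
// keybindings.json: trigger a suggestion manually with Command + H
[
  {
    "key": "cmd+h",
    "command": "editor.action.inlineSuggest.trigger",
    "when": "editorTextFocus"
  }
]
```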
@ShaySheng Thanks, that partially helps, but I still prefer the default autocomplete without any binding; it helps accelerate my coding and writing.
Very slow with Ollama
Triaging issues and linking some that could be similar.
| Related Issues |
|---|
| #3616: The plugin responds slowly if you turn autocomplete on and off more than 10 times in a row |
| #5716: Continue AI Chat model + Autocomplete Running incredibly slow |
| #6570: When using Autocomplete VS Code function list is very slow |
I stopped using Continue about six months ago because of this lag issue. Yesterday I tried using it again, but the problem still exists. Unfortunately, I have deactivated it again.
Hey @rkmax, curious to know more about your setup. Was this with an Ollama model? We have fixes coming for Ollama performance improvements, but it would be good to know if this is something else.
For me, sometimes it autocompletes and other times it does not, or I guess it's just taking way too long to show anything, because if I wait for some time it eventually seems to suggest something, but it's just very slow.
Tested with qwen2.5-coder:7b and qwen2.5-coder:1.5b (Ollama), and also gemini-2.0-flash.
On a MacBook Pro M4 Max 48GB, and I'm only using the autocomplete option, with no models assigned to chat.
I ran into the same issue, it seems, but was able to improve the behavior by increasing the timeouts in the continue.dev settings (you have to scroll down on the main settings page):
- Autocomplete Timeout (ms): 350
- Autocomplete Debounce (ms): 500
While still not as smooth as e.g. Copilot autocomplete, I can now get consistent (and usable) suggestions. Model: qwen-coder:1.5b Machine: MacBook Pro M4 Max
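For reference, if you are on Continue's older JSON config rather than the YAML schema shown above, the debounce can also be set in config.json via tabAutocompleteOptions. A minimal sketch, assuming the documented debounceDelay key; the value is just a starting point, not a recommendation:

```jsonc
// ~/.continue/config.json (sketch, not a full config)
// debounceDelay raises how long Continue waits after a keystroke
// before firing an autocomplete request; fewer in-flight requests
// while typing can reduce editor lag, at the cost of later suggestions.
{
  "tabAutocompleteOptions": {
    "debounceDelay": 500
  }
}
```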