Can't make it work with LMStudio
Before submitting your bug report
- [x] I believe this is a bug. I'll try to join the Continue Discord for questions
- [x] I'm not able to find an open issue that reports the same bug
- [x] I've seen the troubleshooting guide on the Continue Docs
Relevant environment info
- OS: Windows
- Continue: v0.8.25
- IDE: VsCode
I'm using LM Studio with Mistral, Phi, or Llama 3; the model doesn't matter.
Description
Hello,
When using LM Studio, the result is the same in the chat and in in-file edits: I always get the system prompt echoed back in the response, and nothing I try with the prompt formatting fixes it.
I really need help here. It works with Ollama and external APIs.
Thanks
To reproduce
No response
Log output
No response
@tthierryEra can you share your config.json so I can better help debug?
Hello, thank you.
I didn't change anything other than adding these models:

```json
{
  "title": "LM Studio - Llama 8G",
  "provider": "lmstudio",
  "model": "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF"
},
{
  "title": "Crusoeai Llama 8B - 262k",
  "provider": "lmstudio",
  "model": "crusoeai/Llama-3-8B-Instruct-262k-GGUF"
}
```
@tthierryEra thanks. I think the best place to check would be the "Output" tab next to the VS Code terminal; in the dropdown on the right, select "Continue - LLM Prompts/Completions". This shows the exact prompt sent to the LLM. This, in addition to the logs shown in the side panel of LM Studio, will likely show us what we need to see (it's almost certainly a prompt formatting mistake).
The first possible solution I can think of is to double-check your prompt formatting settings on the LM Studio side. I believe you can edit these in the right-side panel.
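Another thing that might be worth trying is pinning the chat template on the Continue side instead of relying on autodetection. A minimal sketch, assuming the model entries in `config.json` accept a `template` field (with values like `"llama3"` or `"chatml"`) as described in the Continue docs:

```json
{
  "title": "LM Studio - Llama 8B",
  "provider": "lmstudio",
  "model": "lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF",
  "template": "llama3"
}
```

If the template is forced to match the model's instruct format, a mismatch between LM Studio's server-side formatting and Continue's should be easier to rule out.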
From my testing, whenever I add a system prompt, whether directly in LM Studio or in Continue's config (e.g. config.ts), that system prompt is printed back into the chat.
If I leave everything empty it "works", but that's not really useful since the LLM doesn't know how to react/interact.
Example:

Output from Continue:

```
Settings:
contextLength: 4096
model: MaziyarPanahi/WizardLM-2-7B-GGUF
maxTokens: 1024
<user>
hello
Completion:
You are a useful coding assistant.hello! How can I assist you today?

As a coding assistant, I'm here to help with a wide range of programming-related questions and tasks. Whether you need assistance writing or understanding code, debugging issues, learning about best practices, or exploring new technologies, feel free to ask. What do you need help with today?
```
I also tried leaving everything empty, or adding:

```
Settings:
contextLength: 4096
model: MaziyarPanahi/WizardLM-2-7B-GGUF
maxTokens: 1024
############################################
<user>
hello. How's it going?
==========================================================================
==========================================================================
Completion:
You are a useful coding assistant.<user>hello. How's it going?</user>Hello there! I'm doing well, thank you. It's always a pleasure to assist with any questions or tasks you might have. How can I help you today?
```
Maybe I'm doing something wrong?
I tried this, or leaving it all empty. It's still not working; I still get the same result.
I just ran into this problem and fixed it on my end. The broken behavior happens when running LM Studio 0.2.22. Reverting to version 0.2.21 fixed the prompt formatting.
You closed the issue because the solution is to downgrade LM Studio? Or is there a fix in Continue?
> I just ran into this problem and fixed it on my end. The broken behavior happens when running LM Studio 0.2.22. Reverting back to the 0.2.21 version has fixed the prompt formatting.
@xinnod Where did you find the old version? Thanks