Adding local LLM (LM Studio) not working: doesn't show up in model list
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the Continue Discord for questions
- [X] I'm not able to find an open issue that reports the same bug
- [X] I've seen the troubleshooting guide on the Continue Docs
### Relevant environment info
- OS: Windows
- Continue: v0.9.110 (pre-release)
- IDE: 1.89.0-insider
### Description
Adding a local LLM (LM Studio) is not working: the model doesn't show up in the model list.
### To reproduce
- Initially added the model manually using the LM Studio reference from the docs. I had tried to add it with the GUI, but the screen only showed for a split second and then closed, not even long enough for me to see what it was.
- Opened settings.json and saved a new model entry: `{ "title": "LM Studio", "provider": "lmstudio", "model": "llama2-7b" }` (a sketch of the full entry follows this list).
- This worked initially, while I had saved and still had the settings file open.
- Closed the settings file; everything was still working.
- Reopened the settings file via the settings icon in the Continue chat dialog to look at the tabAutocompleteModel setting.
- Noticed that all trial models and the local LM Studio model I had just added were missing from the settings file. They had been replaced by an Ollama setting I didn't initiate.
- I was still able to use the local model at this point, until I restarted VS Code.
- Reopened the settings file with the same issue; now I can't run the local LLM.
- Tried adding LM Studio through the GUI. This time I could see the screen and tried to add LM Studio with autodetect.
- This added an LM Studio entry below the Ollama setting I didn't add. LM Studio will not show in the model list, and neither does the Ollama entry.
- Deleted settings.json and reinstalled the extension. The trial settings returned. Added LM Studio from the GUI again; it is still not showing in the model list. I can't select the local LLM to query even with the setting present in the file.
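
For reference, a minimal sketch of the model entry described above, following the shape of Continue's documented config. The `tabAutocompleteModel` block is an assumption about how autocomplete would point at the same model; the comments are illustrative only (strip them if your parser rejects JSONC):

```json
{
  "models": [
    {
      "title": "LM Studio",
      "provider": "lmstudio",
      "model": "llama2-7b"
    }
  ],
  // assumption: autocomplete pointed at the same locally loaded model
  "tabAutocompleteModel": {
    "title": "LM Studio",
    "provider": "lmstudio",
    "model": "llama2-7b"
  }
}
```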
### Log output
No response
@NiceShyGuy It sounds like there are a few separate problems. I hope you don't mind that I have a few extra questions about the first two:
> Tried to add it with the GUI, but the screen only showed for a split second and then closed, not even long enough for me to see what it was.
Does this mean that the sidebar changed back to the main view immediately after selecting a model? Would you be able to share a screenshot or video of what you mean?
> Noticed that all trial models and the local LM Studio model I had just added were missing from the settings file. They had been replaced by an Ollama setting I didn't initiate.
Did this happen after seeing a "Keep existing config" vs. "Use optimized models" screen, like in the screenshot here?
> This added an LM Studio entry below the Ollama setting I didn't add. LM Studio will not show in the model list.
When you use the AUTODETECT option, it calls the /v1/models endpoint of the LM Studio server and fills the dropdown with all of the models you are currently running. If you don't currently have any models running, the dropdown will be empty. If you want to use AUTODETECT, make sure the local inference server is set up and a model is loaded.
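
A minimal sketch of what an autodetect entry looks like, assuming LM Studio's default local server address (`http://localhost:1234/v1`); if you changed the port, adjust `apiBase` accordingly. Running `curl http://localhost:1234/v1/models` should return a non-empty model list when a model is loaded:

```json
{
  "models": [
    {
      "title": "LM Studio (autodetect)",
      "provider": "lmstudio",
      // AUTODETECT populates the dropdown from the server's /v1/models response
      "model": "AUTODETECT",
      // assumption: LM Studio's default server address; change if you use another port
      "apiBase": "http://localhost:1234/v1"
    }
  ]
}
```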
> Does this mean that the sidebar changed back to the main view immediately after selecting a model? Would you be able to share a screenshot or video of what you mean?
On my very first startup, in the main sidebar view, my first action was to click the + button to add a model. I didn't know what the + button did at the time because the new view would never show, so I resorted to editing the settings file manually during the same session as the view glitch.
> Did this happen after seeing a "Keep existing config" vs. "Use optimized models" screen, like in the screenshot here?
I don't remember when this screen appeared, but I would have selected the "Keep existing config" option, as I was going for a fully local setup. If this is the first view shown in the sidebar, I likely selected it first and then tried to use the + button to add a model and ran into the view glitch. It's possible that I did this the other way around, though.
> When you use the AUTODETECT option, it calls the /v1/models endpoint of the LM Studio server and fills the dropdown with all of the models you are currently running. If you don't currently have any models running, the dropdown will be empty. If you want to use AUTODETECT, make sure the local inference server is set up and a model is loaded.
LM Studio had a model loaded and was running the Local Inference Server for the entire session. The manual config worked at first, until the settings issue occurred. It never auto-detected my running server after multiple restarts of VS Code Insiders, and it continues to not auto-detect my running server today. The manual config is working.
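
In case it helps, a sketch of the shape of the manual entry that works, with an explicit `apiBase` added as an assumption (pointing the provider directly at the running server rather than relying on autodetect):

```json
{
  "models": [
    {
      "title": "LM Studio",
      "provider": "lmstudio",
      // the identifier of the model loaded in LM Studio
      "model": "llama2-7b",
      // assumption: explicit server address, LM Studio's default port
      "apiBase": "http://localhost:1234/v1"
    }
  ]
}
```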