Add support for LM Studio endpoints
LM Studio provides a local API for interacting with LLMs. Adding support for LM Studio endpoints will allow users to integrate their locally hosted models seamlessly.
Sure, good idea. I tested overriding the OpenAI configs but got an error like "Api expects list but received object". LM Studio has OpenAI-compatible APIs, so I don't know why it doesn't work. Any ideas?
my configurations
Try different models or check your system prompt; I saw similar behavior when trying out a bunch of models. Edit: most of the models I tried worked, but only 14B+ parameter ones managed to do something decent.
@Martynienas can you please share the prompt template and the model you used as well?
Hi @usmandilmeer,
Thank you for your suggestion to add support for LM Studio endpoints. This is a valuable feature request that aligns well with our goal of expanding LLM support.
From the comments, it seems there are some challenges with using LM Studio's OpenAI-compatible APIs. We recommend:
- Checking Configurations: Ensure that the API request matches the expected format. Errors like "Api expects list but received object" typically indicate that a field the endpoint expects as a list (such as the messages array) was sent as a single object; see the sketch after this comment.
- Model Compatibility: As suggested by @Martynienas, trying different models or adjusting the system prompt might help. If you have a working configuration, sharing it could benefit others.
- Community Input: If anyone has successfully integrated LM Studio, please share your setup and any tips you might have.
We will look into this further and update the documentation once a reliable integration method is established.
Thank you for your contribution and patience!
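For the configuration check mentioned above, it can help to hit LM Studio's OpenAI-compatible server directly with the official openai Python client, independent of any integration. This is a minimal sketch, assuming LM Studio's local server is running on its default port; the base URL, placeholder API key, and model name are assumptions to adjust to your setup.

```python
# Minimal sanity check against LM Studio's OpenAI-compatible server.
# Assumptions: the local server is running on the default port (1234) and a
# model is already loaded; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # adjust if you changed LM Studio's server port
    api_key="lm-studio",                  # LM Studio does not validate the key, but the client requires one
)

# messages must be a LIST of role/content objects; sending a single object here
# is the kind of mismatch that produces "expects list but received object" errors.
response = client.chat.completions.create(
    model="your-loaded-model",            # placeholder: use the identifier shown in LM Studio
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```

If this call succeeds but the integration still fails, the problem is more likely in how the request is constructed than in LM Studio itself.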
Really interested in this, as I tend to use LM Studio instead of Ollama. While it seems to work fine with Ollama, using the LM Studio endpoint with OpenAI selected leads to...
...but I seem to crash out with:
Error code: 400 - {'error': "Invalid tool_choice type: 'object'. Supported string values: none, auto, required"}
INFO [agent] 📍 Step 1
INFO [agent] Planning Analysis:
{
"state_analysis": "The browser is open with a new tab, but no pages have been loaded yet. The current URL is 'about:blank', indicating that the task of retrieving CNN headlines has not started.",
"progress_evaluation": {
"percentage": 0,
"description": "No progress has been made towards retrieving CNN headlines as no navigation to the CNN website has occurred."
},
"challenges": [
"The browser is currently on a blank page and needs to be navigated to the CNN website."
],
"next_steps": [
"Open a new tab or window in the browser.",
"Navigate to the CNN website using the URL 'https://www.cnn.com'."
],
"reasoning": "To retrieve CNN headlines, the first step is to open a new tab or window and navigate to the CNN website. This will allow access to the site's content where headlines can be found."
}
ERROR [agent] ❌ Result failed 3/3 times:
Not sure why tool_choice is borking out. I'm using internvl3-14b-instruct, which I'd imagine could handle this since it's tool-trained and has vision. I have it set as both the planner and base model, with vision enabled and tools set to auto.
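On the tool_choice error in the log above: the OpenAI chat completions API accepts tool_choice either as one of the strings "none", "auto", or "required", or as an object that pins a specific function, and the 400 response suggests this LM Studio build only accepts the string form. Below is a sketch of the difference, assuming a local LM Studio endpoint and a made-up example tool; the URL, API key, model name, and tool definition are all placeholders.

```python
# Sketch of the two tool_choice shapes behind the 400 error above.
# The endpoint URL, API key, model name, and example tool are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

tools = [{
    "type": "function",
    "function": {
        "name": "get_headlines",
        "description": "Return current news headlines.",
        "parameters": {"type": "object", "properties": {}},
    },
}]

# String form: "none", "auto", or "required". This is what the error message
# says the server supports.
ok = client.chat.completions.create(
    model="your-loaded-model",
    messages=[{"role": "user", "content": "Get the headlines."}],
    tools=tools,
    tool_choice="auto",
)

# Object form that names a specific function: accepted by the upstream OpenAI
# API, but rejected with the 400 above by this server, so it is left commented out.
# tool_choice={"type": "function", "function": {"name": "get_headlines"}}
```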
+1
still +1
+1
I swapped the import and then LM Studio's API worked: changing `from browser_use.llm import ChatDeepSeek` to `ChatOpenAI` fixed it.
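Based on the comment above, here is a rough sketch of what that swap could look like when pointing browser_use at LM Studio. The constructor parameters, the endpoint URL, and the model name are assumptions, so verify them against the browser_use version you are running.

```python
# Rough sketch of the swap described above: use ChatOpenAI from browser_use
# against LM Studio's OpenAI-compatible endpoint instead of ChatDeepSeek.
# Parameter names, the endpoint URL, and the model name are assumptions.
import asyncio

from browser_use import Agent
from browser_use.llm import ChatOpenAI

llm = ChatOpenAI(
    model="your-loaded-model",            # placeholder: the model identifier loaded in LM Studio
    base_url="http://localhost:1234/v1",  # LM Studio's local OpenAI-compatible endpoint
    api_key="lm-studio",                  # LM Studio does not validate the key
)

agent = Agent(task="Retrieve the current CNN headlines.", llm=llm)
asyncio.run(agent.run())
```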
Yeah, would have been nice to know that before I spent 3 hours trying to run it with LM Studio...
Brutal... so still no solution?