DeepCode
[Question]: Do you support local LLMs (Ollama and LM Studio)?
Do you need to ask a question?
- [x] I have searched the existing question and discussions and this question is not already answered.
- [x] I believe this is a legitimate question, not just a bug or feature request.
Your Question
Hello, my question is simple: do you support local LLM managers such as Ollama or LM Studio? I tried putting http://localhost:1234 in my configuration, but I get loads of errors, so I think this is not supported. Thank you.
Additional Context
No response
The documentation mentions that the underlying system uses the OpenAI SDK. For local deployment, you can use tools like vLLM, or any other method that serves the model through an OpenAI-compatible API. Simply configure the api_key as 'EMPTY' and set the base_url to the address where your model is served, and it should work.
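For example, here is a minimal sketch using the OpenAI Python SDK against a locally served OpenAI-compatible endpoint. The port 8000 and the model name below are just assumptions for illustration; substitute the address and model that your vLLM or LM Studio instance actually exposes.

```python
from openai import OpenAI

# Point the standard OpenAI client at a local OpenAI-compatible server.
# The base_url and model name are examples; use the address and model
# your vLLM / LM Studio instance actually serves.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # local server, including the /v1 path
    api_key="EMPTY",                      # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",     # must match a model loaded by the server
    messages=[{"role": "user", "content": "Hello from a local deployment"}],
)
print(response.choices[0].message.content)
```

The same base_url and api_key pair is what you would then put into your configuration.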
Thank you very much for your prompt response and clear explanation!
Have you already tried Ollama? Which model specifically did you use? Does it work well for you?
Ollama won't work; it has to be a tool that deploys the model in an OpenAI-compatible way, such as vLLM. As you mentioned, LM Studio seems to support this.
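As a quick sanity check that a given endpoint really speaks the OpenAI-compatible API, you can list the models it exposes. The address http://localhost:1234/v1 below assumes LM Studio's default local server port; adjust it to wherever your OpenAI-compatible server is running.

```python
from openai import OpenAI

# Quick check that the local server answers the standard /v1/models endpoint.
# http://localhost:1234/v1 is an assumed LM Studio default; adjust as needed.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="EMPTY")

for model in client.models.list():
    print(model.id)  # these ids are what go in the model field of your config
```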
ok, thank you.