LM Studio support
Great project.
I’m setting up a full local environment on Windows 11 using WSL2 for development. I’ve discovered that installing LM Studio directly on Windows allows me to take full advantage of the GPU.
So far, I’ve managed to run a local Docker instance of Supabase and would love to see LM Studio supported by Archon.
Additionally, since Ollama lets you specify a base URL, could setting it to http://localhost:1234/v1 connect to the LM Studio server?
Also, please consider adding an Update section to the main documentation. Does updating require another `docker-compose up --build -d` after pulling the latest changes?
Shouldn't be hard to integrate. I added OpenRouter to my version of the repo, along with a separate embeddings provider selector.
That is what I want. @Chillbruhhh, could you please share your solution? I'm not seeing any options other than OpenAI, Google Gemini, and Ollama (Coming Soon)!
That's awesome @Chillbruhhh !
I would love to try this before it is released on Archon's official repo. Would you like to share the updates you have made?
Sure @ubjayasinghe, let me button it up and I'll post my branch here; I'll also submit it as a PR.
A couple of points that got my local setup working, which might help someone else out there or be worth adding to the docs (a rough connectivity check is sketched after this list):
- Local Supabase in Docker Desktop: try `SUPABASE_URL=http://host.docker.internal:8000` if `SUPABASE_URL=http://localhost:8000` is not working (I was getting a credential error on Archon-Server start-up), or you might need the local host IP address (I haven't tested this, but it was a suggested solution).
- Choosing `Ollama` as the LLM provider and changing the Base URL to `http://localhost:1234/v1` connected to LM Studio running in Windows. Maybe the option should be renamed to "Local LLM (Ollama, LM Studio, etc.)", or an LM Studio option could be added with that URL.
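In case it helps anyone debugging the same thing, here's a rough, untested Python sketch of what I mean by a connectivity check. The URLs are just the candidates from my notes above, and it assumes you run it from inside the Archon-Server container (e.g. via `docker exec`) so it tests what the container can actually reach:

```python
# Rough connectivity check: which candidate base URLs are reachable from here?
# Assumes the `requests` package is available; adjust the URLs to your setup.
import requests

CANDIDATES = {
    "Supabase (localhost)": "http://localhost:8000",
    "Supabase (host.docker.internal)": "http://host.docker.internal:8000",
    "LM Studio (localhost)": "http://localhost:1234/v1/models",
    "LM Studio (host.docker.internal)": "http://host.docker.internal:1234/v1/models",
}

for name, url in CANDIDATES.items():
    try:
        resp = requests.get(url, timeout=3)
        # Any HTTP status (even 401/404) means the host is reachable;
        # an exception means this environment cannot see that address at all.
        print(f"{name}: reachable (HTTP {resp.status_code})")
    except requests.RequestException as exc:
        print(f"{name}: NOT reachable ({exc.__class__.__name__})")
```

Note that `host.docker.internal` only resolves from inside Docker Desktop containers, so the same script run directly on the host will give different results.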
https://github.com/Chillbruhhh/Archon/tree/feature/openrouter-support
I added OpenRouter because of OpenAI's tier rate limiting. I ran into this issue when I continuously crawled documentation with Cole's crawl4ai-rag MCP. Feel free to try out the unofficial version.
LM Studio's API is OpenAI compatible, so it should be trivial to use the OpenAI connector and just change the base URL.
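For example, something along these lines should work with the standard `openai` Python client. This is just a sketch, not Archon's actual connector code; the model identifier is whatever you have loaded in LM Studio, and the API key is a placeholder since LM Studio doesn't check it:

```python
# Minimal sketch: the official openai client pointed at LM Studio's local server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default server port
    api_key="lm-studio",                  # placeholder; not validated locally
)

resp = client.chat.completions.create(
    model="local-model",  # use the model identifier shown in LM Studio
    messages=[{"role": "user", "content": "Say hello from LM Studio."}],
)
print(resp.choices[0].message.content)
```

The same pattern should also cover OpenRouter by swapping the base URL for `https://openrouter.ai/api/v1` and supplying a real API key.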