Let users choose between local/hosted inference & cloud APIs
It makes sense that some users won't have hardware capable of running LLMs locally. In that case, they might want to use external APIs instead.
It could be interesting to provide the following options (see the sketch after this list for how the providers could be abstracted):
- [x] Ollama (#101)
- [x] OpenAI (#163)
- [ ] Mistral
- [ ] Claude
- [ ] Gemini
- [x] Groq (#157)
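A rough sketch of what the provider abstraction could look like, assuming a TypeScript codebase. The interface and class names below are hypothetical, not the plugin's actual API; only the Ollama `/api/generate` and OpenAI-style `/chat/completions` request shapes follow the public docs:

```typescript
// Hypothetical abstraction: each backend implements the same interface,
// so the user can switch between local and hosted inference in settings.
interface CompletionProvider {
  complete(prompt: string): Promise<string>;
}

// Local inference via the Ollama HTTP API (default port 11434).
class OllamaProvider implements CompletionProvider {
  constructor(private model: string, private baseUrl = "http://localhost:11434") {}

  async complete(prompt: string): Promise<string> {
    const res = await fetch(`${this.baseUrl}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: this.model, prompt, stream: false }),
    });
    const data = await res.json();
    return data.response;
  }
}

// Hosted inference via an OpenAI-compatible chat completions endpoint.
class OpenAICompatibleProvider implements CompletionProvider {
  constructor(
    private apiKey: string,
    private model: string,
    private baseUrl = "https://api.openai.com/v1"
  ) {}

  async complete(prompt: string): Promise<string> {
    const res = await fetch(`${this.baseUrl}/chat/completions`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({
        model: this.model,
        messages: [{ role: "user", content: prompt }],
      }),
    });
    const data = await res.json();
    return data.choices[0].message.content;
  }
}
```

Groq and Mistral expose OpenAI-compatible endpoints, so they could likely reuse the second class with a different `baseUrl`; Claude and Gemini would need their own implementations of the same interface.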
I agree.
@bright258 Which LLM provider would you like to use most? We now have full support for Groq & Ollama.
OpenAI
@bright258 Done :) Just merged #163