
Let users choose between local/hosted inference & cloud APIs

Open · frgfm opened this issue 11 months ago · 4 comments

It makes sense that some users won't have compatible hardware to run LLMs locally. In that case, they might want to use external APIs instead.

It could be interesting to provide the following options (see the sketch after this list):

  • [x] Ollama (#101)
  • [x] OpenAI (#163)
  • [ ] Mistral
  • [ ] Claude
  • [ ] Gemini
  • [x] Groq (#157)
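
One reason this list is tractable: Ollama and Groq both expose OpenAI-compatible chat endpoints, so a single client can switch between local and hosted inference just by swapping the base URL. Below is a minimal sketch of that idea, not the project's actual implementation; the provider table, model names, env var names, and the `get_client` helper are all illustrative assumptions.

```python
# Minimal sketch: one `openai` client targeting local (Ollama) or hosted
# (Groq, OpenAI) inference. Models and env vars below are placeholders.
import os

from openai import OpenAI

PROVIDERS = {
    # Local inference via Ollama's OpenAI-compatible server
    "ollama": {"base_url": "http://localhost:11434/v1", "key_env": None, "model": "llama3"},
    # Hosted cloud APIs
    "groq": {"base_url": "https://api.groq.com/openai/v1", "key_env": "GROQ_API_KEY", "model": "llama3-8b-8192"},
    "openai": {"base_url": "https://api.openai.com/v1", "key_env": "OPENAI_API_KEY", "model": "gpt-4o-mini"},
}


def get_client(provider: str) -> tuple[OpenAI, str]:
    cfg = PROVIDERS[provider]
    # Ollama ignores the API key, but the client requires a non-empty string
    api_key = os.environ[cfg["key_env"]] if cfg["key_env"] else "ollama"
    return OpenAI(base_url=cfg["base_url"], api_key=api_key), cfg["model"]


client, model = get_client("ollama")
resp = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```

Providers without an OpenAI-compatible endpoint (e.g. Claude or Gemini at the time of this issue) would need their own SDK behind the same interface.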

frgfm commented on Mar 15, 2024

I agree.

bright258 commented on May 10, 2024

> I agree.

@bright258 which LLM provider would you most like to use? We now have full support for Groq & Ollama.

frgfm commented on May 13, 2024

OpenAI

bright258 commented on May 13, 2024

> OpenAI

@bright258 Done :) Just merged #163

frgfm commented on May 15, 2024