
Local LLMs Support


Hi, thanks for building and opening Savvy!

Is there any way I can configure it to use a locally-running LLM? With OpenAI-compatible API or otherwise.

Thanks!

av avatar Aug 09 '24 09:08 av

Curious about your use case for local LLMs vs OpenAI?

joshi4 avatar Aug 09 '24 14:08 joshi4

Usage in no-network conditions, data protection, and the ability to choose specific models for specific kinds of workloads (for example, fine-tuned models).

Nothing unique, just a "local LLM" use case.

av avatar Aug 09 '24 14:08 av

Thanks for sharing!

Local redaction and support for local LLMs are planned, and I'm tracking them on our public feedback board here.

joshi4 avatar Aug 12 '24 03:08 joshi4

Hi @av,

quick update:

I've moved away from using OpenAI for generating runbooks; we now use Llama 3.1 hosted on Groq.

Savvy ask/explain still uses GPT-4o for now.

joshi4 avatar Aug 23 '24 17:08 joshi4

I'm interested in this too. It would be great if the CLI respected something like OPENAI_API_BASE for the backend to query. I just poked through the code (for about 10 seconds, so take it with a grain of salt) and it looks like all the prompting etc. is done server-side, so this would require replicating that logic here.
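For illustration, here's a minimal Go sketch of what "respecting OPENAI_API_BASE" could look like on the client side. This is not Savvy's actual code: the env-var fallback, struct names, and the "llama3.1" model name are assumptions; only the OpenAI-compatible /chat/completions request shape is standard.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// chatRequest mirrors the OpenAI-compatible /chat/completions payload.
type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

func main() {
	// Respect OPENAI_API_BASE if set (e.g. a local llama.cpp or Ollama
	// server exposing an OpenAI-compatible API); otherwise fall back to
	// the official endpoint.
	base := os.Getenv("OPENAI_API_BASE")
	if base == "" {
		base = "https://api.openai.com/v1"
	}

	body, _ := json.Marshal(chatRequest{
		Model: "llama3.1", // hypothetical local model name
		Messages: []message{
			{Role: "user", Content: "Summarize these shell commands as a runbook."},
		},
	})

	req, _ := http.NewRequest("POST", base+"/chat/completions", bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

As noted above, though, a client-side switch like this only helps if the prompting moves out of Savvy's server and into the CLI.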

sjuxax avatar Dec 09 '24 15:12 sjuxax

@sjuxax and @av, I'm getting started on this and should have something for y'all to try very soon.

joshi4 avatar Dec 13 '24 17:12 joshi4

@sjuxax and @av you can now BYO LLM with Savvy's CLI. See the docs here: https://docs.getsavvy.so/guides/byo_llm/

Implemented in #154

joshi4 avatar Dec 21 '24 03:12 joshi4