Shantanu
Curious about your use case for local LLMs vs OpenAI?
Thanks for sharing! Local redaction and support for local LLMs are planned, and I'm tracking them on our public feedback board [here](https://savvy.featurebase.app/?b=66a299f74c55240443f380de)
Hi @av, quick update: I've moved away from using OpenAI for generating runbooks and we now use Llama 3.1 hosted on Groq. Savvy ask/explain still uses GPT-4o for now.
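For anyone curious, Groq exposes an OpenAI-compatible API, so the switch is mostly a matter of base URL and model name. A rough sketch of the kind of request involved (the model name and prompt here are illustrative, not Savvy's actual code):

```
# Illustrative call to Groq's OpenAI-compatible chat endpoint.
# GROQ_API_KEY must be set; the model name is an example, not
# necessarily the one Savvy uses.
curl https://api.groq.com/openai/v1/chat/completions \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama-3.1-70b-versatile",
        "messages": [
          {"role": "user", "content": "Turn these shell commands into a runbook step."}
        ]
      }'
```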
@sjuxax and @av, getting started on this - should have something for y'all to try very soon.
@sjuxax and @av, you can now BYO LLM with Savvy's CLI. See the docs here: https://docs.getsavvy.so/guides/byo_llm/ (implemented in #154).
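If you want to try it against a fully local model, one option is Ollama, which serves an OpenAI-compatible endpoint on localhost. The sketch below is illustrative; the linked docs have the exact steps for pointing Savvy at your endpoint:

```
# Stand up a local OpenAI-compatible endpoint with Ollama.
# Model choice is illustrative; see the BYO LLM docs for how
# to configure Savvy to use it.
ollama serve &                # start the server if it isn't already running
ollama pull llama3.1          # download the model
# Sanity-check the endpoint:
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1", "messages": [{"role": "user", "content": "hello"}]}'
```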
Hi @Yeraze! Thanks for the feature request. I'm curious: what's your use case for Savvy on a Raspberry Pi?
Could you also share the output of these commands from your Raspberry Pi:

```
os=$(uname -s | tr '[:upper:]' '[:lower:]')
arch=$(uname -m)
echo $os
echo $arch
```
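For reference, on 64-bit Raspberry Pi OS I'd expect those to print `linux` and `aarch64`, while 32-bit images usually report `armv7l` (or `armv6l` on the oldest boards). That tells us which build targets we'd need to add.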
Closing for now, feel free to reopen.