wenis
Very excited for this one. A lot of us are probably running local LLMs and would just use llama.cpp to serve up an LLM instance and embeddings.
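For anyone curious what that setup might look like, here's a minimal sketch assuming a local llama.cpp `llama-server` instance listening on port 8080 with its OpenAI-compatible API; the port, endpoint paths, and model name below are assumptions, not something confirmed in this thread:

```python
# Rough sketch: talking to a local llama.cpp server for chat and embeddings.
# Assumes llama-server is running at localhost:8080 with OpenAI-compatible
# endpoints; adjust BASE_URL and the model placeholder to your own setup.
import requests

BASE_URL = "http://localhost:8080/v1"  # assumed llama-server address

def chat(prompt: str) -> str:
    """Send a single-turn chat completion request to the local server."""
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": "local-model",  # placeholder; the server uses whatever model it loaded
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def embed(text: str) -> list[float]:
    """Request an embedding vector for the given text."""
    resp = requests.post(
        f"{BASE_URL}/embeddings",
        json={"model": "local-model", "input": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["data"][0]["embedding"]

if __name__ == "__main__":
    print(chat("Say hello in one sentence."))
    print(len(embed("local embeddings via llama.cpp")))
```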
> once you guys merge in the Ollama support, I'll refactor it and add support for Docker local LLM models as well if you'd like; been testing the Docker LLM models all...