maccarone
Any plans to add support for llama2.cpp?
It would be great to be able to run code-llama2 locally.
I agree that it would be interesting to try other models, especially local models. Happy to accept patches. I won't have time to implement this myself in the near future, though.
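For anyone interested in prototyping such a patch: llama.cpp ships a server binary that exposes an OpenAI-compatible `/v1/chat/completions` endpoint, so one possible integration path is routing completion requests to a local endpoint instead of the hosted API. A minimal sketch, assuming a server listening at `http://localhost:8080` (the helper name, default model string, and base URL are illustrative assumptions, not part of maccarone's actual API):

```python
import json
import urllib.request

def build_local_request(prompt: str,
                        base_url: str = "http://localhost:8080",
                        model: str = "codellama") -> urllib.request.Request:
    """Hypothetical helper: build an OpenAI-compatible chat completion
    request aimed at a local llama.cpp server."""
    body = json.dumps({
        "model": model,  # local servers often ignore this; kept for API parity
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_local_request("Write a Python hello world.")
print(req.full_url)
# To actually send it (requires a running local server):
#   with urllib.request.urlopen(req) as resp:
#       reply = json.load(resp)["choices"][0]["message"]["content"]
```

Because the wire format matches OpenAI's, most of the existing request/response handling could stay unchanged; only the base URL (and auth) would need to become configurable.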