dogjamboree

2 issues by dogjamboree

Inference is in general much slower than with llama.cpp on my M2 Pro processor with any model (I tried Vicuna 13B, LLaMA 13B, and Alpaca 13B). I would say approximately...

I use these LLMs for creative writing, and with Vicuna I'm able to get it to revise sections, such as adding or removing details, making things longer or shorter, etc. (with...