LLM Module planned?
Hey there,
Have you planned to integrate any of the available LLM modules? Something like GPT4All, pyllama, llama.py, or llama-cpp-python? It would be cool to be able to run even the 7B models in some small scripts, or just to muck about in general.
I'm going to think about it.
Lovely to hear! Thanks for the reply.
I just tested it on my Mac and I underestimated how much performance it really needs. Not only did a prompt like "name the most common pets" take about 6 minutes, other prompts produced straight-up garbage answers (e.g. "Name the planets in the solar system" got the answer "1.A" after 4 minutes 😂). Don't bother with it, it won't be worth it performance-wise for the next few years IMO. ✌️