Support falcon models
Feature request
I'm not sure if it's already there, but since GPT4All is mentioned on the Falcon-7B-Instruct model card, I think it shouldn't be too hard to implement, right?
Motivation
Better performance and accuracy, since Falcon outperforms most other open models on NLP benchmarks.
Your contribution
I'm not sure if I can, nor whether I have the permissions!
I may be wrong, but it probably depends on if/when the model is supported in [llama.cpp](https://github.com/ggerganov/llama.cpp/issues/1602).
Update: Was indeed wrong :)
We don't need llama.cpp to support Falcon in order for us to support it. Notice we already support MPT and GPT-J :) It's being worked on.
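For context on what that could look like from the user side, here's a minimal sketch of loading a Falcon model through the existing Python bindings once the backend gains support. This assumes the current `GPT4All` / `generate()` API stays the same; the model filename is a placeholder, not a released artifact.

```python
from gpt4all import GPT4All

# Hypothetical Falcon model file name -- adjust to whatever the project
# actually ships once support lands.
model = GPT4All("ggml-falcon-7b-instruct.q4_0.bin")

# Same generate() call already used with the MPT and GPT-J models.
output = model.generate("Explain what makes the Falcon architecture different.", max_tokens=128)
print(output)
```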