gpt4all
How can I use Wizard-Vicuna-30B-Uncensored-GGML?
Feature request
Is there a way to get Wizard-Vicuna-30B-Uncensored-GGML working with gpt4all?
Motivation
I'm very curious to try this model.
Your contribution
I'm very curious to try this model.
If I'm not mistaken, this is due to an architecture change, and will be supported in the next release.
I would also like to test out these kinds of models within GPT4all. Is it even possible to place manual model files in the folders and make them show up in the GUI? I guess if that is possible, we can only use certain .bin files and not .safetensors files, right?
If I'm not mistaken, this is due to an architecture change, and will be supported in the next release.
Not sure when exactly, but yes, I'd say you're right. In fact, I'm running Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_1.bin right now. But not with the official chat application; it was built from an experimental branch. I don't know what limitations there are once that's fully enabled, if any.
I would also like to test out these kinds of models within GPT4all. Is it even possible to place manual model files in the folders and make them show up in the GUI? I guess if that is possible, we can only use certain .bin files and not .safetensors files, right?
It's possible, but they need to have the right format. And there was a breaking change to the format earlier this month, so there are incompatibilities. But as mentioned, the devs are trying to resolve that.
I think there are some converter scripts somewhere to enable models other than the ones offered through the downloader, but I've never tried that myself.
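If you want to check whether a manually downloaded .bin file is in a format the app can load, here is a minimal sketch that reads the file's magic number and version. The magic values are taken from the llama.cpp GGML family as of mid-2023 (`ggml`, `ggmf`, `ggjt`) and are an assumption that may change in later releases; the May 2023 breaking change mentioned above corresponds to a `ggjt` version bump.

```python
import struct

# Known GGML-family container magics (little-endian uint32) from llama.cpp
# as of mid-2023. Assumption: later releases may add or change formats.
GGML_MAGICS = {
    0x67676D6C: "ggml (unversioned, oldest)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (versioned, mmap-able)",
}

GGML_MAGIC_UNVERSIONED = 0x67676D6C


def inspect_model(path):
    """Return (format_name, version) for a GGML-family .bin file.

    version is None for the unversioned 'ggml' format or unknown files.
    """
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
        name = GGML_MAGICS.get(magic)
        if name is None:
            return ("unknown (not a GGML file?)", None)
        if magic == GGML_MAGIC_UNVERSIONED:
            # The original 'ggml' format carries no version field.
            return (name, None)
        (version,) = struct.unpack("<I", f.read(4))
        return (name, version)
```

A file named like Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_1.bin should, if the naming convention holds, report the `ggjt` format at version 3.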
Works now.
Hi, where can I download that Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_1.bin? I couldn't find the link anywhere.
Uh, I added that link to the repository? Not sure how you couldn't find it.
But as I said, it was not with the official chat application but a custom build on an experimental branch.
Uh, I added that link to the repository? Not sure how you couldn't find it.
But as I said, it was not with the official chat application but a custom build on an experimental branch.
I am confused how I couldn't find it before, sorry for that :)