Installing models
Issue you'd like to raise.
After downloading some of the models to install them, the files won't install; they disappear from the download location and then show as available to download again.
So I decided to look for the .bin file online, download it, and copy it to the folder. How do I install it?
Suggestion:
No response
There are links on the homepage; scroll down to "Model explorer".
Programmatically, look at e.g. the Python bindings; they rely on this JSON: https://gpt4all.io/models/models.json
Installing them is just moving them to the appropriate folder.
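For example, here's a rough sketch of reading that models.json from Python to see which files the UI knows about. The exact field names (like "filename" and "md5sum") are my assumption, so check the JSON itself:

```python
# Sketch: list the model files referenced by gpt4all's models.json.
# Field names ("filename", "md5sum") are assumptions; inspect the JSON to confirm.
import json
import urllib.request

MODELS_JSON_URL = "https://gpt4all.io/models/models.json"

with urllib.request.urlopen(MODELS_JSON_URL) as resp:
    models = json.load(resp)

for entry in models:
    # Each entry describes one downloadable model file.
    print(entry.get("filename"), "-", entry.get("md5sum"))
```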
Maybe there should be some documentation or UI clarity updates on this. I can’t figure it out either.
A button on the downloads page would work well. It'd open up a file browser and you would click on a model to import.
@spacecowgoesmoo That is a good idea. Apparently, you can do that manually at the moment, as mentioned in this issue: https://github.com/nomic-ai/gpt4all/issues/722#issuecomment-1564110399 You go to the "Model Explorer" section of the home page on the gpt4all website and select the model you want, then download it via your browser or a download manager. Then, copy it into the folder referenced as your "Download path" in the Chat UI, and it should be detected automatically. I hope a more user-friendly way is implemented soon.
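If anyone wants to script that copy step, something like the Python sketch below should do it. The folder shown is only my guess at the Windows default; use whatever your "Download path" setting in the Chat UI actually points to, and the .bin filename is just an example:

```python
# Sketch: copy a manually downloaded .bin into the Chat UI's download folder.
# Both paths below are examples/assumptions; use the "Download path" from the UI settings.
import shutil
from pathlib import Path

downloaded_model = Path.home() / "Downloads" / "ggml-gpt4all-j-v1.3-groovy.bin"  # example file
models_dir = Path.home() / "AppData" / "Local" / "nomic.ai" / "GPT4All"          # assumed Windows default

models_dir.mkdir(parents=True, exist_ok=True)
shutil.copy2(downloaded_model, models_dir / downloaded_model.name)
print("Copied to", models_dir / downloaded_model.name)
```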
I'm just putting a downloaded model into the models folder, in this case "mpt-7b-storywriter.ggmlv3.q4_0.bin". It appears in the UI and can be selected among the others downloaded directly through the UI. But when I select it, it just gets stuck on "loading model", and RAM and CPU don't show any usage in Task Manager.
It seems to only work with the models downloaded via the UI.
I'd appreciate it if someone could help with this.
Not entirely sure, but I think that model might not be supported at the moment. It's not an MPT model from the website, at least.
You cannot just download any random model from somewhere on the internet and expect it to work. There are many different binary formats (and even versions).
Typically, LLaMA based models from other sources with "ggml" and "v3" in their name should likely work, though.
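If you want to sanity-check a local .bin outside the chat UI, you can try loading it with the Python bindings first; if it fails or hangs there too, the format probably isn't supported. This is only a sketch, and the exact constructor arguments (model_path, allow_download) may differ between binding versions:

```python
# Sketch: try loading a local model file with the gpt4all Python bindings.
# Argument names reflect my reading of the current bindings and may vary by version.
from gpt4all import GPT4All

model = GPT4All(
    "mpt-7b-storywriter.ggmlv3.q4_0.bin",  # the file dropped into the models folder
    model_path="/path/to/your/models",     # the folder the Chat UI downloads into
    allow_download=False,                  # fail instead of silently re-downloading
)
print(model.generate("Once upon a time"))
```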
Thank you for the fast reply!
Not entirely sure, but I think that model might not be supported at the moment. It's not an MPT model from the website, at least.
Ok. I chose this model because it supports 65k tokens as input, which is good for longer texts.
You cannot just download any random model from somewhere on the internet and expect it to work. There are many different binary formats (and even versions).
I naively thought it was just a matter of dropping in models in ggml format.
Typically, LLaMA based models from other sources with "ggml" and "v3" in their name should likely work, though.
Ok, I will try some other ggml and v3 model.