gpt4all
Add Llama-3.1-405B-FP8
Feature Request
Will you do us the honors of testing it on your local machine?
It will be a pleasure.
We're working on this ASAP. It will work with the newest version, which will be released shortly. Upstream llama.cpp is working on providing better support as well, and when they do, we'll merge it in and make a new release.
Thank you! Got it working in the UI with no problem. I grabbed the filename it downloaded when I added the model and tested it in the GPT4All UI, and everything worked.

Then I replaced it in my code with `model = GPT4All("Meta-Llama-3.1-8B-Instruct.Q4_0.gguf")`
and I get this error:
LLAMA ERROR: failed to load model from /Users/davidsmith/.cache/gpt4all/Meta-Llama-3.1-8B-Instruct.Q4_0.gguf
LLaMA ERROR: prompt won't work with an unloaded model!
The Python binding release to support it has not been made yet. Probably not until next week, as the Python binding maintainer is on vacation.
This can be closed, I guess? If it still doesn't work, you can re-open.