S. Dale Morrey
There's another project that relies on this one under the hood. It's called Auto-GPT. I believe it can search the web.
Not on 4GB; on 8GB it might be possible. I have an older laptop with about the same specs and it's pegged out on all 4 cores and all...
It's possible you have them disabled in the BIOS. But I also didn't realize you were running Windows. It does seem to be an issue on Windows builds that there...
I don't think your AI crashed because it didn't want to talk. Most likely you ran out of memory and it segfaulted. You should submit a crash dump if you...
Recompile it manually. It will be slow though, because the instruction set you're missing is AVX or AVX2.
You're fine. I encountered this exact same issue. The AVX instruction set is not present on budget processors. You'll need to recompile the chat portion locally so that it can...
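Whether a rebuild is needed comes down to whether the CPU advertises the AVX flags. A minimal sketch of the check, assuming Linux where the flags appear in /proc/cpuinfo (the sample flags line here is illustrative; on a real machine you would grep /proc/cpuinfo directly):

```shell
# Check whether the CPU reports AVX/AVX2 support.
# "$flags" stands in for a line from /proc/cpuinfo; it is a made-up sample.
flags="flags : fpu sse sse2 avx avx2"

if grep -qw avx2 <<<"$flags"; then
  echo "AVX2 supported"
elif grep -qw avx <<<"$flags"; then
  echo "AVX supported"
else
  echo "no AVX - rebuild the chat binary without AVX"
fi
```

On a CPU without the flags, neither `grep` matches and the script reports that a local rebuild is needed.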
It's sort of an exercise to sort out these issues. The general process is to fork the repo, do a git pull to your computer, and run "make" from there...
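The process above can be sketched as shell commands. This is a hedged outline, not a definitive build recipe: the repo URL is the fork mentioned later in the thread, and the model path is illustrative.

```shell
# Sketch of the rebuild-from-source process described above.
# Repo URL is from the thread; the model filename is an assumption.
git clone https://github.com/zanussbaum/gpt4all.cpp
cd gpt4all.cpp
make                                       # builds the chat binary against your local CPU's instruction set
./chat -m ../gpt4all-lora-quantized.bin    # illustrative path to a downloaded model
```

Building locally is what lets the compiler target only the instructions your CPU actually has, instead of assuming AVX.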
Looks like I'm missing the AVX instruction on my CPU.
```
username@computer:/opt/gpt4all 0.1.0/bin$ grep -E 'avx|avx2' /proc/cpuinfo
username@computer:/opt/gpt4all 0.1.0/bin$ sudo lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Address sizes:       39...
```
I was able to solve this by recompiling from source, following the directions here: https://github.com/zanussbaum/gpt4all.cpp then starting it with a -m param to choose the model. Slow as dirt on my...
Definitely works in some unexpected ways...
```
username@computer:~/Projects/aixcelus/gpt4all-build/gpt4all.cpp$ ./chat -m ../../gpt4all/gpt4all-lora-unfiltered-quantized.bin
main: seed = 1681707857
llama_model_load: loading model from '../../gpt4all/gpt4all-lora-unfiltered-quantized.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.35...
```