
Mac M1 command line usage, returning garbled code

Open liudaolunboluo opened this issue 1 year ago • 3 comments

I downloaded the gpt4all-lora-quantized.bin file and copied it to the chat directory. I then ran gpt4all-lora-quantized-OSX-m1 and successfully saw the following:

main: seed = 1680837810
llama_model_load: loading model from 'gpt4all-lora-quantized.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.35 MB
llama_model_load: memory_size =  2048.00 MB, n_mem = 65536
llama_model_load: loading model part 1/1 from 'gpt4all-lora-quantized.bin'
llama_model_load: ............................ done
llama_model_load: model size =  3134.53 MB / num tensors = 230

system_info: n_threads = 4 / 8 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | 
main: interactive mode on.
sampling parameters: temp = 0.100000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.300000


== Running in chat mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to LLaMA.
 - If you want to submit another line, end your input in '\'.

But it doesn't seem to be working properly. It replied with the following content, which appears to be garbled:

> how much RAM would I need?
$

> hello?
!	
"$
"#"	$

How can I make it work properly? Thank you very much.

liudaolunboluo avatar Apr 07 '23 03:04 liudaolunboluo

I just installed it and it is working well: (screenshot of working output)

Cayan avatar Apr 08 '23 16:04 Cayan

I think you are using an older version. Compare the llama_model_load output he posted with yours. You either have an old model file or a corrupted bin. Redownload the 4 GB bin and try again.
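Before re-running, a quick sanity check on the download can rule out a truncated file. This is just a sketch: the filename and the ~4 GB size threshold are assumptions based on the log above; for a real integrity check, compare a checksum against the value published on the official download page.

```shell
# Sanity-check the downloaded model file.
# MODEL and MIN_BYTES are assumptions -- adjust to your download.
MODEL="gpt4all-lora-quantized.bin"
MIN_BYTES=4000000000   # the full model is roughly 4 GB

if [ ! -f "$MODEL" ]; then
  STATUS="missing"
elif [ "$(wc -c < "$MODEL")" -lt "$MIN_BYTES" ]; then
  STATUS="truncated"
else
  STATUS="ok"
fi
echo "$STATUS: $MODEL"
```

If this reports anything other than "ok", delete the file and redownload before troubleshooting further.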

Question: is anyone else using an M1 MacBook? Mine has been running super slow, and I am trying to figure out why it's running slower than it did originally.
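For what it's worth, one thing to check on Apple Silicon is the thread count the binary reports (the log above shows n_threads = 4 / 8). You can compare that against the machine's core count with something like:

```shell
# Print the number of logical CPU cores.
# sysctl works on macOS; nproc is the fallback on Linux.
NCPU=$(sysctl -n hw.ncpu 2>/dev/null || nproc)
echo "cores: $NCPU"
```

llama.cpp-derived binaries usually expose a threads option, but whether this particular build does is an assumption; check its help output to confirm.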

joeyricard84 avatar Apr 11 '23 13:04 joeyricard84

I was using the latest available when I posted, and for me, it worked well. I tested it on an M2 chip.

Cayan avatar Apr 11 '23 21:04 Cayan

Stale, please open a new issue if this is still relevant.

niansa avatar Aug 11 '23 11:08 niansa