gpt4all-chat
Slow responses despite low resource usage
Thanks for making this available, but it took almost a minute to answer my simple "Hi" prompt. On complex tasks it also gets terminated after about 5 minutes. I checked Windows Task Manager and it uses only about 50 percent CPU and 30 percent RAM. Why is that? Can't you make it faster?
You'll need to provide more computer specs to make this question answerable.
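For anyone wanting to include those specs in a report, a minimal way to collect the usual details on Linux (assuming standard coreutils, util-linux, and procps tools are installed):

```shell
# Gather the basic system specs a maintainer typically asks for (Linux).
uname -srm                                 # kernel version and architecture
lscpu | grep -E 'Model name|^CPU\(s\)'     # CPU model and logical core count
free -h | head -2                          # total and used RAM
```

On Windows, the equivalent information is in Task Manager's Performance tab or `systeminfo`.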
Hi, same issue here, running GPT4All with ggml-vicuna-13b-1.1-q4_1.bin on Ubuntu 22.04 with LXQt.
The 13B-parameter models will be very slow to run on CPU regardless of your underlying chipset. Running on M1 or M2 Macs, or running the models on a GPU, will be your best option.
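To see why CPU inference is bandwidth-bound rather than compute-bound (which also explains the low CPU utilization in Task Manager), here is a rough back-of-envelope sketch: each generated token streams essentially all of the model's weights from RAM, so throughput is roughly memory bandwidth divided by model size. The bandwidth figures below are illustrative assumptions, not measurements.

```python
# Back-of-envelope estimate of token generation speed for a quantized model.
# Assumption: per token, all weights are read once, so RAM bandwidth is the bottleneck.

# ggml q4_1 stores weights in 32-weight blocks: 16 bytes of 4-bit values
# plus two fp16 values (scale and min), i.e. 20 bytes per 32 weights.
GGML_Q4_1_BYTES_PER_WEIGHT = 20 / 32  # = 0.625 bytes/weight

def model_size_gb(n_params: float,
                  bytes_per_weight: float = GGML_Q4_1_BYTES_PER_WEIGHT) -> float:
    """Approximate in-memory size of the quantized weights, in GB."""
    return n_params * bytes_per_weight / 1e9

def tokens_per_second(model_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on generation speed if every token streams the full model."""
    return bandwidth_gb_s / model_gb

size = model_size_gb(13e9)  # ~8.1 GB for a 13B q4_1 model
for name, bw in [("dual-channel DDR4 (~30 GB/s)", 30),
                 ("Apple M1 (~68 GB/s)", 68),
                 ("Apple M1 Pro (~200 GB/s)", 200)]:
    print(f"{name}: ~{tokens_per_second(size, bw):.1f} tokens/s")
```

This is why adding CPU cores shows diminishing returns: once memory bandwidth saturates, the cores sit partly idle, which matches the ~50% CPU usage reported above.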
On Tue, Apr 25, 2023, 9:51 AM DrGood01 @.***> wrote:
Even on an M1 or M2 (Pro included) it is very, very slow and after a while becomes genuinely unusable, to the point where it's not really worth using.