
GPT4All 2.4.0 client extremely slow on M2 Mac

Open michael-murphree opened this issue 2 years ago • 26 comments

I installed the default macOS installer for the GPT4All client on a new Mac with an M2 Pro chip. It takes somewhere in the neighborhood of 20 to 30 seconds to add a word, and it slows down as it goes.

In one case, it got stuck in a loop repeating a word over and over, as if it couldn't tell it had already added it to the output.

Is that client for M1 Macs only?

Edit: 2023 MacBook Pro, 16 GB

michael-murphree avatar May 09 '23 15:05 michael-murphree

Same here. It'll occasionally freeze; I get the spinning beach ball and have to force quit. Activity Monitor shows about 1 GB of RAM, which is much less than when I was running the first GPT4All in the terminal (which was more like 4 GB).

On an M2 Pro with 16 GB RAM.

dustinogle avatar May 09 '23 19:05 dustinogle

Same, M2 is beyond slow.

logikonline avatar May 09 '23 23:05 logikonline

Same issue here. Allocated 8 threads and I'm getting a token every 4 or 5 seconds. M2 Air with 8GB RAM.

jdblackiii avatar May 10 '23 03:05 jdblackiii

Same here, on an M2 Air with 16 GB RAM. CPU runs at ~50%. A token every 10 seconds. The whole UI is very sluggish: "Stop generating" takes another 20 seconds to take effect, and then the application becomes unresponsive.

thomasklein avatar May 10 '23 08:05 thomasklein

Running the system from the command line (launcher.sh) works better (2 to 3 seconds to start generating text, and 2 to 3 words per second), though even that gets stuck in the repeating output loops.

The command line doesn't seem able to load the same models that the GUI client can use, however. Only the "unfiltered" model worked with the command line.

Edit: Latest repo changes removed the CLI launcher script :(

michael-murphree avatar May 10 '23 13:05 michael-murphree

same here!!!!

u10if avatar May 11 '23 17:05 u10if

+1

riccardoangius avatar May 15 '23 12:05 riccardoangius

+1, any solutions?

XGuoo avatar May 15 '23 17:05 XGuoo

Same problem here. It is unable to complete the task and keeps freezing.

fatihbozdag avatar May 16 '23 19:05 fatihbozdag

Same here, super slow trying all models. Intel i5, 16 GB RAM, 2 GB GPU (but using CPU only), on a Linux Mint desktop.

ResearchForumOnline avatar May 17 '23 13:05 ResearchForumOnline

2.4.3 continues to experience this issue.

michael-murphree avatar May 17 '23 15:05 michael-murphree

I've now tried every model available via the GUI, web chat, settings, etc. All run in super slow motion, like they're trying to break free from ice ^_^. It must be that the settings need tweaking, but no one knows how. I'll try to make some time soon and provide screenshots and a tutorial so people don't get stuck, though it might be the main code that needs fixing/updating rather than the settings.

ResearchForumOnline avatar May 17 '23 15:05 ResearchForumOnline

how to run only the web server?

sibelius avatar May 18 '23 01:05 sibelius

how to run only the web server?

Interesting, I haven't tried the web server setting. Someone needs to explain and show their settings (screenshots, etc.) from a working install. Probably a minor setting needs adjusting.

ResearchForumOnline avatar May 18 '23 10:05 ResearchForumOnline
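For those asking about the web server: a minimal sketch of querying the GPT4All client's optional local API server from Python. The port (4891) and the OpenAI-style completions route are assumptions based on commonly reported defaults, and the model name is illustrative; check your client version's settings. The request is wrapped so the script degrades gracefully when no server is running.

```python
import json
import urllib.error
import urllib.request

# Assumed defaults: the client's "web server" option is enabled in its
# settings and listens locally with an OpenAI-compatible route. Both the
# port and the path are assumptions -- adjust to match your install.
BASE_URL = "http://localhost:4891/v1/completions"

payload = {
    "model": "gpt4all-l13b-snoozy",   # illustrative model name
    "prompt": "Why is the sky blue?",
    "max_tokens": 64,
    "temperature": 0.7,
}

def query_local_server(url: str, body: dict, timeout: float = 10.0):
    """POST a JSON body to the local server; return None if unreachable."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except (urllib.error.URLError, OSError):
        return None  # server not running / wrong port

if __name__ == "__main__":
    result = query_local_server(BASE_URL, payload)
    print(result if result is not None else "No local server reachable.")
```

Running the app headless this way (GUI open, server enabled, then scripting against the local endpoint) is also one answer to the "is there some way to run this in a browser?" question later in the thread.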

Extremely slow on an M2 Mac Pro here too! Has anyone found a solution?

jrbeluzo-usp avatar May 19 '23 13:05 jrbeluzo-usp

Version 1.3 is the fastest. I set the CPU thread count to 3 and the token limit from 4096 to 8094; still super slow, but better than before.

ResearchForumOnline avatar May 19 '23 13:05 ResearchForumOnline

I'm noticing now that my RAM usage is stuck at 936.5 MB or 957.9 MB, never going over 1000 MB, so this must be the issue: the RAM allocation seems capped at 1024 MB. How do I increase that setting? Setting the CPU thread count to 3 works fastest, using 75% CPU, but RAM stays stuck at about 1 GB max. I have 16 GB of RAM, an Intel i5 CPU, and an old 2 GB graphics card. It works faster after I set the process to highest priority in the Linux Mint System Monitor, but it's still very slow, and RAM sits exactly at the 1 GB cap, which I don't know where to edit. Tried again with a different model and it used 4 GB of RAM; still super slow, but a thread count of 3 is the best it gets for now.

RAM tops out at 1 GB for some models and 4 GB for others. How do I increase the RAM per model?

ResearchForumOnline avatar May 19 '23 17:05 ResearchForumOnline
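On the "RAM per model" question: the resident memory is dominated by the model weights themselves, so it scales with parameter count and quantization rather than with any configurable RAM setting, which would explain different models plateauing at different sizes. A back-of-the-envelope sketch (the parameter counts, bit widths, and fixed overhead below are illustrative guesses, not measurements):

```python
def approx_model_ram_gb(n_params_billion: float, bits_per_weight: int,
                        overhead_gb: float = 0.5) -> float:
    """Rough weight-memory estimate: params * bits / 8, plus a small
    fixed overhead for context cache and buffers (overhead is a guess)."""
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes / 1e9 + overhead_gb

# Illustrative comparison: a 7B model vs a 13B model, both 4-bit quantized.
for name, params in [("7B", 7), ("13B", 13)]:
    print(f"{name} @ 4-bit: ~{approx_model_ram_gb(params, 4):.1f} GB")
```

By this rule of thumb, a smaller or more aggressively quantized model simply needs less memory; there is no per-model RAM knob to turn up.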

Same here, running on an M2 Max MBP with 64 GB memory. I can't get responses to return; it just says "generating response..". Tried 2 different models, same result. It's not even close to utilizing all the resources on the machine.


If anyone has a fix or update, please post. Is there some way to run this headless, or in a browser? I'm running the Mac app from the GUI.

ryanrozich avatar May 27 '23 14:05 ryanrozich

Same problem here :(

FrenyCS avatar May 30 '23 14:05 FrenyCS

Try upgrading to 2.4.4 and setting the number of threads to 8.

jooray avatar May 31 '23 11:05 jooray

Try upgrading to 2.4.4 and setting the number of threads to 8.

No difference at all. Indeed it is even slower.

fatihbozdag avatar May 31 '23 20:05 fatihbozdag

Same issue...stalling with all the models :-(

devtombiz avatar Jun 01 '23 21:06 devtombiz

After restarting my Mac, it works :-)

devtombiz avatar Jun 01 '23 21:06 devtombiz

2.4.4

sorry, where did you get 2.4.4 from?

FrenyCS avatar Jun 02 '23 13:06 FrenyCS

2.4.4 seems to have solved the problem. I used the Maintenance Tool to get the update. I've tried at least two of the models listed on the downloads (gpt4all-l13b-snoozy and wizard-13b-uncensored) and they seem to work with reasonable responsiveness.

Thread count set to 8.

michael-murphree avatar Jun 02 '23 14:06 michael-murphree

I see, you guys are using the client app. I have the same issue but using the Python library, as in this article: https://artificialcorner.com/gpt4all-is-the-local-chatgpt-for-your-documents-and-it-is-free-df1016bc335 Any thoughts?

FrenyCS avatar Jun 02 '23 15:06 FrenyCS
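For the Python-library case, the thread-count advice above may still apply if the bindings expose it. A minimal sketch, with the import guarded since the package may not be installed; the model filename and the `n_threads` / `max_tokens` parameter names are assumptions about the mid-2023 `gpt4all` package API and may differ in your version:

```python
# Sketch of the gpt4all Python bindings (pip install gpt4all). Parameter
# names (n_threads, max_tokens) and the model filename are assumptions --
# check the package's docs for your installed version.
try:
    from gpt4all import GPT4All
    HAVE_GPT4ALL = True
except ImportError:
    HAVE_GPT4ALL = False

def run_prompt(prompt: str) -> str:
    # Illustrative model file; the bindings typically download it if missing.
    model = GPT4All("ggml-gpt4all-l13b-snoozy.bin", n_threads=8)
    return model.generate(prompt, max_tokens=64)

if __name__ == "__main__":
    if HAVE_GPT4ALL:
        print(run_prompt("Why is the sky blue?"))
    else:
        print("gpt4all package not installed; try: pip install gpt4all")
```

If the slowness persists with more threads, it may be the same underlying inference issue the client app hit, rather than anything specific to the bindings.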

M1 Mac Mini is slow too. It took over three minutes to get a few words out, so I just had to force quit. The whole machine becomes a slug.

zeki893 avatar Jun 12 '23 04:06 zeki893

I'm using 2.4.6 on an M1 Max 32GB MBP and getting pretty decent speeds (I'd say above a token / sec) with the v3-13b-hermes-q5_1 model that also seems to give fairly good answers. All settings left on default. Using LocalDocs is super slow though, takes a few minutes every time.

daaain avatar Jun 12 '23 11:06 daaain

There's Metal support now! Please reopen if still an issue.

niansa avatar Aug 11 '23 12:08 niansa

[Screenshot taken 2023-08-24 at 14:20]

It seems gpt4all isn't using the GPU on Mac (M1, Metal) and is using a lot of CPU. Not sure about the latest release.

BodhiHu avatar Aug 24 '23 06:08 BodhiHu