
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.

Results: 1079 gpt4all issues (sorted by recently updated)

### Feature Request

Hi team, I'd like to ask whether it's possible to use BGE-M3 as the text embedding model. After testing it, I found that BGE-M3 provides better context...

enhancement

I installed it and it said that my CPU isn't good enough. Then I tried to uninstall it and it won't uninstall. What the hell. This should work on my laptop.

### Bug Report

Dell, RTX 5090 24 GB, 2 TB hard disk, 32 GB memory. I downloaded the latest GPT4All, installed some local offline large models (GGUF LLMs), and started analyzing, embedding the slices of the...

chat
bug-unconfirmed

Add the same setting as in Alpaca. When the setting is enabled, the time of the message request will be displayed in the chat. The time at which the message...

enhancement

### Bug Report

It is not possible to remove a LocalDocs collection while the indexing/embedding is running. If one added a large collection that would take a long time to...

bug-unconfirmed

Use Case: I have lots of GGUF models downloaded for use in LMStudio and other programs. I would like to have GPT4All access that folder rather than downloading the models...

chat
bug-unconfirmed

### Feature Request

See above.

enhancement

> only _Originally posted by @sanch274F in [b666d16](https://github.com/nomic-ai/gpt4all/commit/b666d16db5aeab8b91aaf7963adcee9c643734d7#r165254576)_

Is it possible to compile it without a GPU? I am on Debian 13 (swaywm/Wayland). If it is possible to compile, what are the steps?