
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.

Results: 1079 gpt4all issues, sorted by recently updated

## Low resolution fonts and icons on Ubuntu 24.10

![Screenshot from 2024-07-27 18-16-26](https://github.com/user-attachments/assets/d7070ad4-3ced-4d5d-9ad7-65ea227fbcb3)

### Steps to Reproduce
Open the chat app; only the chat view shows this bug.

### Environment
- GPT4All...

chat
bug-unconfirmed
chat-ui-ux

### Feature Request A way to set a custom display name for a model. This could be displayed in the "Choose a model" drop-down menu, for example. Or alternatively...

enhancement

As of v3.1, there is no Default Model specified by name anywhere in the interface. The user is only asked to select a model as the...

enhancement

As of v3.1, the TAB key is accepted in the textboxes for:
- System Prompt
- Prompt Template
- Chat Name Prompt
- Suggested FollowUp Prompt

("All" in the image...

enhancement

### Feature Request Maybe this is not possible, or there is no way to do it, but I think it would be very interesting in a local model that...

enhancement

### Feature Request In GPT4All v3.1.0 -> Models -> Explore Models, after a search for models the results can be sorted by Likes, Downloads, or Recent. ("Default" means whatever - unsorted?...

enhancement

### Bug Report
When attempting to load Meta-Llama-3.1-8B-Instruct-128k-Q4_0.gguf via the Python SDK, I get the below error.
```
llama_model_load: error loading model: done_getting_tensors: wrong number of tensors; expected 292, got...
```

bug-unconfirmed
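The failing load can be reproduced in a few lines against the gpt4all Python SDK — a minimal sketch, assuming the `gpt4all` package and its `GPT4All` class, with the .gguf filename taken from the report. The snippet degrades to a printed message when the package or the model file is unavailable:

```python
# Hedged reproduction sketch: loading goes through llama.cpp, which validates
# the tensor count declared in the GGUF header against the tensors it finds;
# a mismatch surfaces as "wrong number of tensors; expected 292, got ...".
try:
    from gpt4all import GPT4All

    # Filename is from the issue text; the file must exist locally or be
    # downloadable by the SDK for this call to succeed.
    model = GPT4All("Meta-Llama-3.1-8B-Instruct-128k-Q4_0.gguf")
    print(model.generate("Hello", max_tokens=8))
except ImportError:
    print("gpt4all package not installed")
except Exception as e:
    print(f"load failed: {e}")
```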

Ordering before this PR (not deterministic, but this is one possibility):
```
foob
fooa
foo
2.0
2.0.0 (greater than 2.0)
2.0.99-rc1
2.0.99-rc10
2.0.99-rc11
2.0.99-rc2
2.0.99-rc20
2.0.99
2.1.0-rc1
2.1.0
3.0.0
3.0.0-dev0...
```
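The broken ordering mixes lexicographic and numeric comparison (so rc10 lands before rc2, and a pre-release can land after its final release). A minimal sketch of a version-aware sort key that yields the intended order for numeric versions — `version_key` is a hypothetical helper, not the PR's actual comparator, and it ignores the non-version `foo*` entries:

```python
import re

def version_key(v: str):
    """Sort key for dotted versions: numeric components compare numerically,
    and a pre-release tag (e.g. '-rc1') sorts before its final release, with
    the tag's number compared as an integer so rc2 < rc10."""
    release, _, pre = v.partition("-")
    nums = tuple(int(p) for p in release.split("."))
    if pre:
        m = re.match(r"([a-z]+)(\d*)", pre)
        tag, n = m.group(1), int(m.group(2) or 0)
        return (nums, 0, tag, n)   # pre-release sorts before the final release
    return (nums, 1, "", 0)

versions = ["2.0.99", "2.0.99-rc10", "2.1.0", "2.0.99-rc2",
            "2.1.0-rc1", "2.0", "3.0.0", "2.0.0"]
print(sorted(versions, key=version_key))
# → ['2.0', '2.0.0', '2.0.99-rc2', '2.0.99-rc10', '2.0.99',
#    '2.1.0-rc1', '2.1.0', '3.0.0']
```

Returning a tuple keeps the comparison deterministic: Python compares the numeric components first, then the pre-release flag, then the tag and its number.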

### Feature Request Hi, it would be nice to have an input/output working directory. Example: if I want to ask it to analyze some code for improvements, I can give...

enhancement

> [!NOTE]
> Until this is fixed, the workaround is to use CPU or CUDA instead.

### Bug Report
Vulkan: Meta-Llama-3.1-8b-128k slow generation. When using release 3.1.1 and Vulkan, the Meta-Llama-3.1-8b-128k is...

bug
chat
vulkan
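The workaround from the note (CPU or CUDA instead of Vulkan) can also be requested from the Python SDK via the `GPT4All` constructor's `device` argument — a hedged sketch: the package and class names follow the gpt4all Python SDK, while the .gguf filename extends the model name given in the issue and is an assumption. The snippet degrades to a printed message when the package or model is unavailable:

```python
# Hedged sketch: force the CPU backend to sidestep the slow Vulkan path.
try:
    from gpt4all import GPT4All

    # device="cpu" avoids the GPU (Vulkan) backend entirely; the filename
    # below is assumed from the model name in the report.
    model = GPT4All("Meta-Llama-3.1-8b-128k.gguf", device="cpu")
    print(model.generate("Hello", max_tokens=8))
except ImportError:
    print("gpt4all package not installed")
except Exception as e:
    print(f"load failed: {e}")
```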