gpt4all
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
## Low resolution fonts and icons on Ubuntu 24.10  ### Steps to Reproduce Open the chat app; only the chat has this bug. ### Environment - GPT4All...
### Feature Request A way to set a custom display name for a model. This could be displayed in the "Choose a model" drop-down menu, for example. Or alternatively...
As of v3.1, there is no Default Model specified by name anywhere on the interface. The user is only asked to select a model as the...
As of v3.1, the TAB key is accepted in the textboxes for - System Prompt - Prompt Template - Chat Name Prompt - Suggested FollowUp Prompt ("All" in the image...
### Feature Request Maybe this is not possible, but I think it would be very interesting in a local model, that...
### Feature Request In GPT4All v3.1.0 -> Models -> Explore Models, after a search for models the results can be sorted by Likes, Downloads, Recent. ("Default" means whatever - unsorted?...
### Bug Report When attempting to load Meta-Llama-3.1-8B-Instruct-128k-Q4_0.gguf via the Python SDK, I get the below error. ``` llama_model_load: error loading model: done_getting_tensors: wrong number of tensors; expected 292, got...
Ordering before this PR (not deterministic, but this is one possibility): ``` foob fooa foo 2.0 2.0.0 (greater than 2.0) 2.0.99-rc1 2.0.99-rc10 2.0.99-rc11 2.0.99-rc2 2.0.99-rc20 2.0.99 2.1.0-rc1 2.1.0 3.0.0 3.0.0-dev0...
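The deterministic ordering the PR aims for (numeric components compared numerically, `-rcN` builds sorting before the corresponding final release and by numeric rc number, non-version strings last) can be sketched with a sort key like the one below. This is a minimal illustration of the intended ordering, not the PR's actual implementation, and it only handles `-rc` pre-release suffixes (not, e.g., `-dev0`):

```python
import re

def version_key(v):
    """Sort key: releases numerically, rc builds before their final release."""
    m = re.fullmatch(r"(\d+(?:\.\d+)*)(?:-rc(\d+))?", v)
    if m is None:
        # Non-version strings sort after all versions, alphabetically.
        return (1, v)
    release = tuple(int(x) for x in m.group(1).split("."))
    if m.group(2) is None:
        return (0, release, 1, 0)              # final release
    return (0, release, 0, int(m.group(2)))    # rc build, by numeric rc number

versions = ["foob", "fooa", "2.0.99", "2.0.99-rc10", "2.0.99-rc2", "2.0", "2.0.0"]
print(sorted(versions, key=version_key))
# "2.0" < "2.0.0", rc2 < rc10 < final, strings last
```

With this key, `2.0.99-rc2` sorts before `2.0.99-rc10` (numeric rc comparison, unlike a plain string sort), and both sort before `2.0.99`.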
### Feature Request Hi, it would be nice to have an input/output working directory. For example: if I want to ask it to analyze code for improvement, I can give...
>[!NOTE] >Until this is fixed, the workaround is to use the CPU or CUDA backend instead. ### Bug Report Vulkan: Meta-Llama-3.1-8b-128k slow generation. When using release 3.1.1 with Vulkan, the Meta-Llama-3.1-8b-128k is...