
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.

Results: 1079 gpt4all issues, sorted by recently updated

### Feature Request Not much of a feature request, but the download size is enormous, and this is multiplied every time there is a version update. I don't have an NVIDIA GPU....

enhancement
chat
installer

CLion uses a `cmake-build-` prefix unlike Qt Creator ## Describe your changes ## Issue ticket number and link ## Checklist before requesting a review - [x] I have performed a...

### Bug Report If the UI crashes while it is in use, all prompts and chats generated since it was started are lost when it is restarted. This is still an...

chat

For some type of ollama/vLLM backend. Based on #2781
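A backend like this would most likely talk to the OpenAI-compatible HTTP API that both Ollama and vLLM already expose. Below is a minimal client-side sketch of that protocol, assuming the `openai` Python package and a locally running server; the base URL, API key, and model name are placeholder assumptions, not values from the issue.

```python
# Sketch of the OpenAI-compatible endpoint that Ollama and vLLM expose.
# Ollama typically serves it at http://localhost:11434/v1 and vLLM at
# http://localhost:8000/v1; adjust base_url for your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

response = client.chat.completions.create(
    model="llama3",  # placeholder: whichever model the server is serving
    messages=[{"role": "user", "content": "Hello from a remote backend"}],
)
print(response.choices[0].message.content)
```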

The pull request for the updated SDK has been merged, but only the macOS `.whl` has been added. See [here](https://pypi.org/project/gpt4all/#files). I'll try compiling from source for my own needs, but...

bug
bindings
python-bindings
circleci

I tried to run on CPU but am getting a CUDA error

### Bug Report

```
import gpt4all

llma_8b = gpt4all.GPT4All(
    model_name="Meta-Llama-3-8B-Instruct.Q4_0.gguf",
    model_path="/repository/models/mohammad/llm_models/rag/",
    device="cpu",
    allow_download=True,
)
```

Running this in...

bindings
python-bindings
bug-unconfirmed
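For reference, a minimal sketch of CPU-only loading and generation with the Python bindings, reusing the model name and path from the report above and assuming the `.gguf` file is already on disk:

```python
# Minimal sketch based on the report above; assumes the model file is
# already present under model_path, so no download is attempted.
from gpt4all import GPT4All

model = GPT4All(
    model_name="Meta-Llama-3-8B-Instruct.Q4_0.gguf",
    model_path="/repository/models/mohammad/llm_models/rag/",
    device="cpu",          # request CPU inference explicitly
    allow_download=False,  # assumption: skip the download step
)
with model.chat_session():
    print(model.generate("Say hello.", max_tokens=32))
```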

### Bug Report When going into Settings -> Model (any model will do), it is impossible to change the "GPU Layers" setting to a number larger than the default. It always reverts back...

chat
bug-unconfirmed
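For comparison, the Python bindings accept a GPU layer count directly. A minimal sketch, assuming the `ngl` keyword argument of the `GPT4All` constructor; the model name and layer count are placeholders:

```python
# Sketch: request a GPU layer count above the usual default through the
# Python bindings. Model name and ngl value are placeholder assumptions.
from gpt4all import GPT4All

model = GPT4All(
    "Meta-Llama-3-8B-Instruct.Q4_0.gguf",
    device="gpu",
    ngl=200,
)
print(model.generate("ping", max_tokens=8))
```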

### Describe your changes Adds model support for [Gemma-2-9b-it](https://huggingface.co/GPT4All-Community/gemma-2-9b-it-GGUF) ### Description of Model As of this writing, the model has strong benchmark results for its parameter size. It...

models
models.json
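Once an entry like this lands in the published models list, the model should become loadable by filename through the clients that read it. A minimal sketch, assuming the bindings pick up the new entry; the filename below is a guess based on the linked Hugging Face repo, not taken from the PR:

```python
# Sketch: loading the newly listed model by filename and downloading it
# on first use. The filename is an assumption, not from the PR.
from gpt4all import GPT4All

model = GPT4All("gemma-2-9b-it-Q4_0.gguf", allow_download=True)
print(model.generate("Introduce yourself in one sentence.", max_tokens=64))
```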