# gpt4all
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
### Feature Request

Not much of a feature, but the download size is enormous, and this is multiplied every time there is a version update. I don't have an NVIDIA GPU....
CLion uses a `cmake-build-` prefix unlike Qt Creator

## Describe your changes

## Issue ticket number and link

## Checklist before requesting a review

- [x] I have performed a...
### Bug Report

If the UI crashes while using it, all prompts and chats generated since starting the UI are lost when the UI is restarted. This is still an...
A request for some type of ollama/vLLM backend. Based on #2781
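Both ollama and vLLM expose an OpenAI-compatible HTTP API, so a remote backend of this kind would, roughly, reduce to a request like the sketch below. This is only an illustration of the idea, not anything from the linked PR; the URL, port, and model name are placeholders.

```
# Rough sketch of what an ollama/vLLM-style remote backend would call.
# Assumptions: a server is already running locally and serves the
# OpenAI-compatible /v1/chat/completions route (ollama and vLLM both do);
# the URL, port, and model name are placeholders, not values from the PR.
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",  # ollama default port; vLLM typically uses 8000
    json={
        "model": "llama3",  # hypothetical model name registered with the server
        "messages": [{"role": "user", "content": "Hello from a remote backend"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```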
The pull request for the updated SDK has been merged, but only the macOS `.whl` has been added. See [here](https://pypi.org/project/gpt4all/#files). I'll try compiling from source for my own needs, but...
I tried to run on CPU but am getting a CUDA error

### Bug Report

```
import gpt4all

llma_8b = gpt4all.GPT4All(model_name="Meta-Llama-3-8B-Instruct.Q4_0.gguf",
                          model_path="/repository/models/mohammad/llm_models/rag/",
                          device="cpu",
                          allow_download=True)
```

Running this in...
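For reference, a minimal CPU-only round trip with the Python bindings would look like the sketch below. The model name and path are taken from the report; the `chat_session()` and `generate()` calls are assumptions added only to show where a CUDA error would surface.

```
# Minimal CPU-only sketch based on the snippet above.
# The model file and path come from the report; the prompt and the
# generate() call are assumptions, not part of the original report.
from gpt4all import GPT4All

model = GPT4All(
    model_name="Meta-Llama-3-8B-Instruct.Q4_0.gguf",
    model_path="/repository/models/mohammad/llm_models/rag/",
    device="cpu",          # should keep inference off any CUDA code path
    allow_download=True,
)
with model.chat_session():
    print(model.generate("Say hello.", max_tokens=32))
```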
### Bug Report

When going into Settings -> Model (any model will do), the "GPU Layers" setting cannot be changed to a number larger than the default. It always reverts back...
### Describe your changes

Adds model support for [Gemma-2-9b-it](https://huggingface.co/GPT4All-Community/gemma-2-9b-it-GGUF)

### Description of Model

At the time of writing, the model has strong results in benchmarks (for its parameter size). It...
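Independent of the model list entry this PR adds, a GGUF from the linked HuggingFace repo can be side-loaded through the Python bindings roughly as sketched below. The quantization filename and the local directory are guesses, not values from the PR; adjust them to whatever file is actually downloaded.

```
# Hypothetical side-load of the community GGUF referenced above.
# "gemma-2-9b-it-Q4_0.gguf" and the model_path are assumptions;
# replace them with the actual downloaded file and its location.
from gpt4all import GPT4All

model = GPT4All(
    model_name="gemma-2-9b-it-Q4_0.gguf",
    model_path="/path/to/downloaded/models",  # wherever the GGUF was saved
    allow_download=False,                     # use the local file only
)
print(model.generate("Why is the sky blue?", max_tokens=64))
```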