gpt4all
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
Sorry, I don't know much about what is going on. I use the backend to integrate into my app and run AI locally on my computer. Is this going away?...
Currently, the GPT4All API server allows interaction with models loaded within the GPT4All application. However, it would be beneficial to extend the API to allow interaction with a local...
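For context, a minimal sketch of what talking to the existing API server looks like from Python, assuming "Enable Local API Server" is turned on in the application settings and the server is on its default port 4891; the model name is a placeholder for whatever the application has installed:

```python
import requests

# Assumption: GPT4All's "Enable Local API Server" option is on and the
# OpenAI-compatible server is listening on its default port 4891.
BASE_URL = "http://localhost:4891/v1"

payload = {
    # Placeholder: must match a model the GPT4All application has available.
    "model": "Llama 3 8B Instruct",
    "messages": [{"role": "user", "content": "Summarize what GPT4All does."}],
    "max_tokens": 128,
}

resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```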
### Feature Request Add an optional text-to-speech (TTS) toggle switch to the GPT4All front-end, allowing users to enable TTS on LLM outputs. This feature would utilize the Dia TTS model,...
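Dia itself is not sketched here; as a rough illustration of the speak-the-output idea, the snippet below pairs the gpt4all Python bindings with the generic pyttsx3 engine instead, and both the model file name and the TTS backend are stand-ins:

```python
import pyttsx3
from gpt4all import GPT4All

# Placeholder model file; any chat model installed in GPT4All would do.
model = GPT4All("Phi-3-mini-4k-instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate("Tell me a one-sentence fun fact.", max_tokens=60)

print(reply)

# Speak the LLM output with the OS's default speech engine (not Dia).
engine = pyttsx3.init()
engine.say(reply)
engine.runAndWait()
```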
Hello! I am using the gpt4all bindings for an application of mine and I do enjoy using the `phi3-mini-4k-instruct` model. However, I have been looking at the newest Phi 3.5,...
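For reference, a minimal sketch of how that model is loaded through the gpt4all Python bindings; the GGUF file name below follows the current catalogue naming, and a Phi 3.5 equivalent would need its own (hypothetical) file name once supported:

```python
from gpt4all import GPT4All

# Downloads the file on first use if it is not already in the model folder.
model = GPT4All("Phi-3-mini-4k-instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate("Explain what an instruct-tuned model is.", max_tokens=200)
    print(reply)
```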
Hey guys, thanks for the efforts in creating and maintaining this project. Please support symlinks on Windows if possible. As soon as I download a model within your app interface,...
### Feature Request The Nomic build of llama.cpp is outdated (e.g. #3523, #3537, #3540) and due to be replaced. This FR is an alternative to the proposed switch to...
### Feature Request Hello to all. I think it would be very useful to be able to access GPT4All from other clients on the LAN. For the moment there is just...
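Since the built-in API server listens on localhost only, one stopgap is a small relay that forwards LAN connections to it. A rough sketch, assuming the server sits on its default port 4891 and that 4892 is free for the LAN-facing side:

```python
import socket
import threading

LISTEN = ("0.0.0.0", 4892)    # LAN-facing port (4892 chosen arbitrarily)
TARGET = ("127.0.0.1", 4891)  # GPT4All's localhost-only API server (default port)

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until that direction of the connection closes."""
    try:
        while chunk := src.recv(65536):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def handle(client: socket.socket) -> None:
    # One upstream connection per client, relayed in both directions.
    upstream = socket.create_connection(TARGET)
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

def main() -> None:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN)
    server.listen()
    print(f"Relaying {LISTEN} -> {TARGET}")
    while True:
        client, _ = server.accept()
        handle(client)

if __name__ == "__main__":
    main()
```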
System Info I installed GPT4All, opened it, and downloaded Gemma3 Instruct from Hugging Face (tried two models: https://huggingface.co/Mungert/gemma-3-12b-it-gguf and https://huggingface.co/ggml-org/gemma-3-1b-it-GGUF). Encountered an error loading the model: "Unsupported model architecture gemma3". Model loading...
Hello folks. I got this error when modifying some PDFs inside my folders. It would be nice if the program could detect when a PDF is being modified or overwritten...
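As an illustration of the kind of change detection being asked for, here is a sketch using the third-party watchdog package (not something GPT4All ships); the watched path is a placeholder for a LocalDocs folder:

```python
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class PdfChangeHandler(FileSystemEventHandler):
    def on_modified(self, event):
        # React only to PDF files, not directories or other file types.
        if not event.is_directory and event.src_path.lower().endswith(".pdf"):
            print(f"PDF changed, should be re-indexed: {event.src_path}")

observer = Observer()
observer.schedule(PdfChangeHandler(), path="/path/to/localdocs", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)   # keep the watcher alive
except KeyboardInterrupt:
    observer.stop()
observer.join()
```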