Jared Van Bortel

Results: 95 issues by Jared Van Bortel

### Feature Request Several times I have heard that users have found system prompts that improve the accuracy of LocalDocs retrieval. e.g., https://github.com/nomic-ai/gpt4all/discussions/1766#discussioncomment-8771690 Using a given system prompt across multiple...

enhancement

### Discussed in https://github.com/nomic-ai/gpt4all/discussions/2064 Originally posted by **SINAPSA-IC** March 2, 2024 Hello. Assuming that there's an update for an LLM that we use in GPT4All, how can we know this?...

enhancement
chat

### Discussed in https://github.com/nomic-ai/gpt4all/discussions/2115 Originally posted by **TerrificTerry** March 13, 2024 I'm currently trying out the Mistral OpenOrca model, but it only runs on CPU at 6-7 tokens/sec. My laptop...

backend
chat
need-info
vulkan
bug-unconfirmed

We could expose llama.cpp's progress_callback to provide a way to both report progress and cancel model loading via the bindings. ref #1934
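The shape of the idea can be sketched as follows. In llama.cpp, the progress callback is invoked with a value between 0.0 and 1.0 during model loading, and returning false aborts the load; the loader and callback names below are illustrative stand-ins, not the actual llama.cpp or GPT4All bindings API.

```python
# Illustrative sketch only: `load_model` is a hypothetical loader, not the
# real bindings API. It mirrors llama.cpp's progress_callback contract:
# the callback receives the fraction loaded, and returning False cancels.
from typing import Callable

def load_model(n_chunks: int, progress_callback: Callable[[float], bool]) -> bool:
    """Pretend to load a model in n_chunks steps; True means load completed."""
    for i in range(n_chunks):
        # ... load chunk i here ...
        if not progress_callback((i + 1) / n_chunks):
            return False  # callback requested cancellation
    return True

seen = []
def report(fraction: float) -> bool:
    seen.append(fraction)       # report progress to the caller
    return fraction < 0.5       # cancel once we reach the halfway mark

cancelled = not load_model(4, report)
print(cancelled, seen)  # True [0.25, 0.5]
```

This single hook covers both use cases from the bindings' side: a callback that always returns True is pure progress reporting, while one consulting a cancel flag gives interruptible loading.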

backend
bindings

A seemingly random set of files (including many JSON files and C source files) is marked executable. I run Linux and use zsh, but given an environment with a fairly...
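One quick way to audit a checkout for this is a `find` over the execute bits. The sketch below stages a throwaway directory purely for demonstration; `-perm /111` is GNU find syntax (BSD/macOS find spells it `-perm +111`).

```shell
# Sketch: list regular files with any execute bit set. The staged temp
# directory is only for demonstration; in a real checkout you would run
# the find against the repository root instead.
dir=$(mktemp -d)
touch "$dir/data.json" "$dir/main.c" "$dir/tool.sh"
chmod +x "$dir/tool.sh"               # only tool.sh should be executable
out=$(find "$dir" -type f -perm /111)
echo "$out"                           # prints only .../tool.sh
rm -rf "$dir"
```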

All tests from [nst/JSONTestSuite](https://github.com/nst/JSONTestSuite) with a single value (string, number, boolean, or null) at the root fail with an error:

```
document root must be object or array
```

One...
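For comparison, RFC 8259 (the current JSON specification) allows any JSON value at the document root, not just objects and arrays, and Python's standard-library `json` module follows it:

```python
import json

# Per RFC 8259, a JSON text is any serialized value; scalar roots are valid.
print(json.loads('"hello"'))  # hello
print(json.loads('42'))       # 42
print(json.loads('true'))     # True
print(json.loads('null'))     # None
```

The object-or-array restriction comes from the older RFC 4627 definition, which is why conformance suites like JSONTestSuite include these scalar-root cases.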

Fixes "WARNING: Request to generate sync embeddings for non-local model invalid" when using Nomic Embed.

user_data is clearly meant to let the programmer attach arbitrary data for later use. At runtime it may be of any type. However, user_data is typed as None throughout these...
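A minimal sketch of the distinction being reported, using hypothetical `connect_*` signatures rather than PyGObject's actual stubs:

```python
from typing import Any, Callable

# Too restrictive: annotating user_data as None means a type checker will
# reject every call that passes real data through it.
def connect_bad(callback: Callable[[None], None], user_data: None) -> None:
    callback(user_data)

# What the report argues for: user_data is an opaque pass-through value,
# so `Any` (or `object`) is the appropriate annotation.
def connect_good(callback: Callable[[Any], None], user_data: Any = None) -> None:
    callback(user_data)

received = []
connect_good(received.append, {"key": "value"})
print(received)  # [{'key': 'value'}]
```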

Here is a simple example:

```python
from gi.repository import GLib

parser = GLib.option.OptionParser(option_list=[
    GLib.option.make_option('--flag', action='store_true', help='flag'),
])
parser.parse_args()
print('flag is', parser.values.flag)
```

mypy complains about type errors:

```
$ mypy...
```

This PR adds opt-in CUDA support in the GPT4All UI and python bindings using the llama.cpp CUDA backend. CUDA-enabled devices will appear as e.g. "CUDA: Tesla P40" on supported platforms,...