3Simplex
### Bug Report
GPU models in memory are slow to unload, unlike CPU models in memory, which unload instantly.
### Steps to Reproduce
1. Open the Windows Task Manager, view "chat"...
### Feature Request
I would love a feature to share and use custom system prompts. I have been creating custom system prompts and using them with many...
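As a rough illustration of what a shareable preset could look like, here is a minimal sketch that writes named system prompts to standalone JSON files; the file layout and field names are assumptions for illustration, not an existing GPT4All format.

```python
import json
from pathlib import Path

# Hypothetical preset format for a shareable system prompt: just a name and the prompt text.
presets = {
    "code-reviewer": "You are a meticulous senior engineer. Point out bugs and style issues.",
    "socratic-tutor": "You are a tutor who answers only with guiding questions.",
}

# Write each preset to its own file so it can be shared and re-imported elsewhere.
out_dir = Path("system_prompts")
out_dir.mkdir(exist_ok=True)
for name, prompt in presets.items():
    path = out_dir / f"{name}.json"
    path.write_text(json.dumps({"name": name, "system_prompt": prompt}, indent=2))
    print(f"wrote {path}")
```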
### Bug Report
Pre-existing collections from before the update to 2.7.4 do not work after the update. Only collections created in 2.7.4 work.
### Steps to Reproduce
1. Create a collection in...
### Bug Report
When using a Mac set to use Metal, the gpt-j model fails to fall back to CPU.
### Steps to Reproduce
1. On a Mac, set the application device to use...
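For comparison, the fallback behavior this report expects can be sketched against the Python bindings; the gpt-j file name is hypothetical, and it is an assumption that `device="gpu"` selects Metal on macOS and that construction raises an exception when the backend is unsupported.

```python
from gpt4all import GPT4All  # assumes the gpt4all Python bindings are installed

MODEL = "ggml-gpt4all-j-v1.3-groovy.gguf"  # hypothetical gpt-j model file name

try:
    # Request the GPU; on macOS this means Metal, where gpt-j is reported to fail.
    model = GPT4All(MODEL, device="gpu")
except Exception as err:
    # Expected behavior: fall back to the CPU instead of failing outright.
    print(f"GPU init failed ({err}); falling back to CPU")
    model = GPT4All(MODEL, device="cpu")

print(model.generate("Hello", max_tokens=16))
```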
## Describe your changes
Menu theme colors were adjusted to produce a softer look while maintaining the current scheme. To achieve this, property names were assigned for each color according to the object to which...
After a long debate, I think we've settled on a simple option in LocalDocs that will turn off all automatic reindexing of LocalDocs collections.

OLDER ORIGINAL REQUEST
### Feature Request
The...
### Bug Report
While downloading a model, the view is updated, causing the list to jump back to the model as it is downloading.
### Steps to Reproduce
1. Add...
### Feature Request
Show the amount of context used during a chat. Some users have been curious about the status of the context window and how much of it they...
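The arithmetic behind such an indicator is simply the tokens consumed so far divided by the size of the context window; a minimal sketch, with the function name and example numbers purely illustrative:

```python
# Context usage is just tokens consumed so far divided by the size of the context window.
def context_usage(tokens_used: int, n_ctx: int) -> float:
    """Return the fraction of the context window that is already filled."""
    return min(tokens_used / n_ctx, 1.0)

# Example: 3,072 tokens of chat history in a 4,096-token window -> 75% used.
print(f"{context_usage(3072, 4096):.0%}")
```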
### Feature Request
Remove defaults for model templates:
- System Prompt
- Chat Template
- Tool Calling

Add GUI warnings that they have to configure these in order to use...
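To illustrate what a user would be asked to configure once the defaults are gone, here is a minimal sketch that renders a chat template with Jinja; the template body and message format are assumptions for illustration, not the exact format the application uses.

```python
from jinja2 import Template  # pip install jinja2; used only to illustrate a chat template

# Hypothetical values the user would have to supply once built-in defaults are removed;
# the template body is illustrative, not the exact format the application ships.
system_prompt = "You are a helpful assistant."
chat_template = Template(
    "{{ system }}\n"
    "{% for m in messages %}{{ m.role }}: {{ m.content }}\n{% endfor %}"
    "assistant:"
)

print(chat_template.render(
    system=system_prompt,
    messages=[{"role": "user", "content": "What does this feature request change?"}],
))
```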
>[!NOTE]
>Until this is fixed, the workaround is to use the CPU or CUDA instead.

### Bug Report
Vulkan: Meta-Llama-3.1-8b-128k slow generation. When using release 3.1.1 and Vulkan, the Meta-Llama-3.1-8b-128k is...
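A minimal sketch of the workaround from the Python bindings, under the assumption that selecting the device explicitly avoids the Vulkan backend; the model file name is hypothetical, and the accepted device strings should be checked against the bindings' documentation.

```python
from gpt4all import GPT4All  # assumes the gpt4all Python bindings are installed

# Workaround from the note above: avoid the default Vulkan backend by selecting the
# device explicitly. "cpu" is known to work; whether these bindings also accept "cuda"
# here is an assumption, so verify the exact device strings in the bindings' docs.
MODEL = "Meta-Llama-3.1-8B-Instruct-128k.Q4_0.gguf"  # hypothetical file name
model = GPT4All(MODEL, device="cpu")

print(model.generate("Why is the sky blue?", max_tokens=64))
```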