Alpaca
Add an option to allow title generation using the model used in chat
Is your feature request related to a problem? Please describe.
When running bigger models (at the limits of my hardware) that are usually not the same as the "Default title model" selected in the preferences of the Ollama instance, there is a huge lag and sometimes a crash, because two models have to be loaded: the big one used in the chat and a small one for the title.
Describe the solution you'd like
Perhaps an option to reuse the already loaded chat model to generate the title would be more efficient in terms of resources? A rough sketch of the idea follows below.
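
For illustration only, here is a minimal sketch of what that could look like, assuming a local Ollama instance and its standard non-streaming /api/generate endpoint; the function name, prompt wording, and model names are hypothetical and not Alpaca's actual implementation:

```python
# Hypothetical sketch, not Alpaca's actual code. It assumes a local Ollama
# instance at the default address and uses Ollama's public /api/generate
# endpoint in non-streaming mode.
import requests

OLLAMA_URL = "http://localhost:11434"  # default Ollama address (assumed local instance)

def generate_title(chat_model: str, first_message: str) -> str:
    """Ask the model that is already loaded for the chat to produce a short
    title, so a separate "Default title model" never has to be loaded."""
    response = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={
            "model": chat_model,  # reuse the chat model instead of a dedicated title model
            "prompt": (
                "Summarize the following message as a chat title of at most "
                f"five words. Reply with the title only.\n\n{first_message}"
            ),
            "stream": False,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["response"].strip()

# Example: title the conversation with the same model that answered it
# (model name is illustrative).
# print(generate_title("llama3:70b", "How do I set up a reverse proxy with nginx?"))
```

Since the chat model is already resident in memory, this request would not trigger a second model load, which is the source of the lag and crashes described above.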