[Bug]: only llama2 allowed locally
What happened?
Only Llama2 is supported locally, even though it is a weaker model than Mistral or Mixtral. Ollama can run both of those models.
Relevant log output
No response
Twitter / LinkedIn details
No response
🤖
Hello @sumitsodhi88! I'm Dosu, a friendly bot here to assist you while we wait for a human maintainer. I'm equipped to help with bug solving, answering questions, and guiding contributors. Feel free to ask me anything you'd like about the Quivr repository. Let's work on this together!
Based on the documentation provided in the repository, Quivr does support other models like MistralAI through Ollama, not just Llama2. You can add other models to Quivr by adding them to the `models` column in the `user_settings` table in Supabase. Here's the relevant JSON:
```json
[
  "ollama/llama2",
  "ollama/mistral"
]
```
This JSON adds both the Llama2 and Mistral models to Quivr. The command `ollama run llama2` is just an example of how to run a model using Ollama; it doesn't mean that Quivr only supports Llama2. You can find more information in the ollama.mdx document.
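For reference, pulling and running Mistral with the Ollama CLI looks like this (a minimal sketch; the exact model tags available depend on your Ollama version and the Ollama library):

```bash
# Download the Mistral weights (only needed once)
ollama pull mistral

# Start an interactive chat session with Mistral
ollama run mistral
```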
As for the specific reason why only llama2 is supported locally in the current version of Quivr, I wasn't able to find an answer within the repository. It's possible that there may be a technical reason behind this decision, but without further information, it's difficult to provide a definitive answer. If you have more details or context, it would be helpful in providing a more accurate response.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
No, it only tries to load Mistral and then changes the model back to Llama2. Each question requires reloading the model.
Embeddings are only supported by Llama2. However, questions can be answered with any model you want.
Can you do an installation tutorial for Ollama?
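For reference (not an official Quivr guide), a minimal Ollama setup on Linux uses the one-line install script documented on the Ollama site; macOS and Windows have downloadable installers instead:

```bash
# Official install script for Linux (see https://ollama.com for other platforms)
curl -fsSL https://ollama.com/install.sh | sh

# Verify the install
ollama --version

# Start the Ollama server if it isn't already running as a service
ollama serve
```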
Thanks for your contributions; we'll be closing this issue as it has gone stale. Feel free to reopen if you'd like to continue the discussion.