
[Bug]: only llama2 allowed locally

Open sumitsodhi88 opened this issue 1 year ago • 5 comments

What happened?

Only Llama2 is supported locally, and it is a weaker model than Mistral/Mixtral. Ollama can run both of those models.

Relevant log output

No response

Twitter / LinkedIn details

No response

sumitsodhi88 avatar Jan 06 '24 05:01 sumitsodhi88

🤖

Hello @sumitsodhi88! I'm Dosu, a friendly bot here to assist you while we wait for a human maintainer. I'm equipped to help with bug solving, answering questions, and guiding contributors. Feel free to ask me anything you'd like about the Quivr repository. Let's work on this together!

Based on the documentation provided in the repository, Quivr does support other models like MistralAI through Ollama, not just Llama2. You can add other models to Quivr by adding them to the models column in the user_settings table in Supabase. Here's the relevant JSON:

[
  "ollama/llama2",
  "ollama/mistral"
]

This JSON adds both the Llama2 and Mistral models to Quivr. The command ollama run llama2 is just an example of how to run a model using Ollama. It doesn't mean that Quivr only supports Llama2. You can find more information in the ollama.mdx document.
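If you edit the `models` column by hand, keep in mind the value must be strict JSON, which rejects trailing commas. A minimal sanity check before saving the row (a generic sketch, not part of Quivr itself):

```python
import json

# The value intended for the `models` column of the `user_settings`
# table. Note: no trailing comma after the last entry -- strict JSON
# parsers (including Postgres jsonb) reject one.
models_value = '["ollama/llama2", "ollama/mistral"]'

# json.loads raises json.JSONDecodeError if the value is malformed,
# so a successful parse confirms the column value is safe to store.
parsed = json.loads(models_value)
print(parsed)  # ['ollama/llama2', 'ollama/mistral']
```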

As for the specific reason why only llama2 is supported locally in the current version of Quivr, I wasn't able to find an answer within the repository. It's possible that there may be a technical reason behind this decision, but without further information, it's difficult to provide a definitive answer. If you have more details or context, it would be helpful in providing a more accurate response.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

dosubot[bot] avatar Jan 06 '24 05:01 dosubot[bot]

No, it only tries to load Mistral and then switches the model back to Llama2. Each question requires reloading the model.

sumitsodhi88 avatar Jan 06 '24 05:01 sumitsodhi88

(screenshot attachment)

sumitsodhi88 avatar Jan 06 '24 05:01 sumitsodhi88

Embeddings are only supported by Llama2. However, questions can be answered with any model you want.

StanGirard avatar Jan 07 '24 21:01 StanGirard

Embeddings are only supported by Llama2. However, questions can be answered with any model you want.

Could you do an installation tutorial for Ollama?

enriqueramosg avatar Jan 18 '24 11:01 enriqueramosg

Thanks for your contributions, we'll be closing this issue as it has gone stale. Feel free to reopen if you'd like to continue the discussion.

github-actions[bot] avatar Apr 17 '24 12:04 github-actions[bot]