
problem with ollama

Open odevroed opened this issue 2 years ago • 3 comments

Hi,

First, this is a great project. I love it!

I tried to run v3, as I installed a few LLMs with ollama (which works fine), but I keep hitting this error: ValueError: The number of documents in the SQL database (229) doesn't match the number of embeddings in FAISS (0). Make sure your FAISS configuration file points to the same database that you used when you saved the original index.

This happens whenever I ask a question, whether or not I upload a document first; both cases give the same error. I checked, and ollama is running on port 11434 (the default).

For info, I'm on Fedora with Python 3.10.13 in a venv.

odevroed avatar Feb 23 '24 14:02 odevroed

@odevroed Thank you for your kind words, I am glad you are enjoying it! That is my fault: I need to push a fix that deletes the existing indexes each time you run the program. In the meantime, if you delete both .db files as well as the FAISS config/JSON files, it will work again.
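The cleanup above could be scripted along these lines. This is a hedged sketch, not part of local_llama itself: it assumes the stale databases and FAISS files sit in the working directory, and the glob patterns (including the exact file extensions) are assumptions you may need to adjust.

```python
# Hypothetical cleanup sketch -- the file locations and extensions are
# assumptions; adjust the patterns to match your local_llama directory.
from pathlib import Path


def reset_indexes(root: str = ".") -> list[str]:
    """Remove stale SQLite databases and FAISS index/config files so the
    app rebuilds them from scratch on the next run."""
    removed = []
    for pattern in ("*.db", "*.faiss", "*.pkl", "*.json"):
        for path in Path(root).glob(pattern):
            path.unlink()  # delete the stale file
            removed.append(path.name)
    return sorted(removed)
```

Running `reset_indexes()` before starting the app should clear the mismatch until the fix lands.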

jlonge4 avatar Feb 23 '24 22:02 jlonge4

This works perfectly! Thanks a lot.

I wonder, will you implement a more straightforward way to change the model than editing the code? Also, I tried gemma and the results are not good. Which types of models work well with your v3?

odevroed avatar Feb 26 '24 13:02 odevroed

@odevroed That is also on my list, haha. The plan is to include a drop-down for selecting whichever model, dynamically swapping the prompt as well to fit each model for the same task.
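A minimal sketch of what that prompt swapping could look like. Everything here is illustrative: the model names, the templates, and the `build_prompt` helper are placeholders, not local_llama's actual code.

```python
# Hypothetical prompt-swapping sketch -- model names and templates are
# illustrative placeholders, not the project's actual values.
MODEL_PROMPTS = {
    "llama2": "[INST] Answer using the context: {question} [/INST]",
    "mistral": "<s>[INST] {question} [/INST]",
    "gemma": "<start_of_turn>user\n{question}<end_of_turn>\n<start_of_turn>model\n",
}


def build_prompt(model: str, question: str) -> str:
    """Pick the template matching the selected model; fall back to the
    bare question for unknown models."""
    template = MODEL_PROMPTS.get(model, "{question}")
    return template.format(question=question)

# In a Streamlit-style UI the selection could come from a widget, e.g.:
#   model = st.selectbox("Model", list(MODEL_PROMPTS))
```

The idea is that each entry in the drop-down maps to both an Ollama model name and a matching prompt template, so swapping models never requires a code edit.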

jlonge4 avatar Feb 26 '24 13:02 jlonge4