
Open-source RAG framework for building GenAI Second Brains 🧠 Build a productivity assistant (RAG) ⚡️🤖 Chat with your docs (PDF, CSV, ...) & apps using LangChain, GPT-3.5 / GPT-4 Turbo, Private, Anthrop...

Results: 721 quivr issues

When deploying the project, replace the startup command in the Dockerfile of the front-end project with `yarn start` ![image](https://github.com/StanGirard/quivr/assets/135188368/fcd4a9e8-fadb-40d1-a346-eb996a6e44ff) what is...

It seems to fix the issue from issue #141 for me when I add `qa_prompt` and `condense_question_prompt` to `ConversationalRetrievalChain.from_llm` instead of overwriting the built-in prompt of the...

I have installed Supabase on a Linux system (using their Docker installer) and have also added Quivr on the same system. I've come to find that Quivr and Supabase use...

# Description This commit adds an additional sanitization step to remove unescaped Unicode null characters from `page_content` before creating a `Document` object. This avoids a previously fatal parsing error of...
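A minimal sketch of such a sanitization step (the helper name is illustrative; the actual commit may strip more than NUL bytes):

```python
def sanitize_page_content(text: str) -> str:
    """Strip unescaped null characters that would otherwise cause a
    fatal parsing error downstream. Illustrative helper, not the PR's
    exact code."""
    return text.replace("\x00", "")
```

Running the cleaned text through the loader then proceeds without the null-byte parse failure.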

- It would be great to have reminders, notifications & an SRS (spaced-repetition) algorithm built in.
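If an SRS were added, a common starting point is the SM-2 spaced-repetition algorithm. A hedged sketch of one review step (purely illustrative, not part of Quivr):

```python
def sm2_review(quality: int, reps: int, interval: int, ease: float):
    """One SM-2 review step.

    quality: recall grade 0-5; reps: consecutive successful reviews;
    interval: days since the last scheduled review; ease: easiness factor.
    Returns the updated (reps, interval, ease).
    """
    if quality >= 3:  # successful recall: grow the interval
        if reps == 0:
            interval = 1
        elif reps == 1:
            interval = 6
        else:
            interval = round(interval * ease)
        reps += 1
    else:  # failed recall: restart the repetition sequence
        reps, interval = 0, 1
    # Adjust the easiness factor, clamped at SM-2's floor of 1.3
    ease += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
    return reps, interval, max(1.3, ease)
```

A scheduler would call this after each review and set the next reminder `interval` days out.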

# Description
1. Adding new tables for multiple brains: `brains`, `brains X users`, `brains X vectors`.
2. Adding new controllers for endpoint `/brain`: Get one brain...

# Description Please include a summary of the changes and the related issue. Please also include relevant motivation and context. ## Checklist before requesting a review Please delete options that...

After opening the chat bar with the chat history, if you go to another page and come back, it doesn't remember whether it was open or closed. Meaning...

good first issue
frontend

Looks like everything installed and started without errors. I have passed all the env variables in the .env files. I'm getting this output. Nothing on localhost, as if the...

We want to implement response streaming like you see in ChatGPT. The data should be displayed on the FE as it is returned by the LLM.

user story
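Server-side, such streaming is often done by yielding tokens as the LLM produces them, e.g. as Server-Sent Events the frontend renders incrementally. A minimal Python sketch, with a stubbed token source standing in for the real LLM streaming callback:

```python
from typing import Iterable, Iterator

def stream_tokens(tokens: Iterable[str]) -> Iterator[str]:
    """Wrap each token in an SSE 'data:' frame for incremental display.

    The token source is a stub; in practice it would be the LLM's
    streaming callback. The '[DONE]' sentinel tells the FE the stream
    has ended (mirroring the OpenAI streaming convention).
    """
    for tok in tokens:
        yield f"data: {tok}\n\n"
    yield "data: [DONE]\n\n"
```

On the frontend, an `EventSource` (or a fetch reader) would append each frame's payload to the visible message as it arrives.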