# langchain-ask-pdf-local
An AI app that lets you upload a PDF and ask questions about it. It uses StableVicuna-13B and runs locally.
## Ask Your PDF, locally
| ![]() |
|---|
| Answering a question about the 2303.12712 paper (a 7 MB PDF) |
This is an attempt to recreate Alejandro AO's langchain-ask-pdf (also check out his tutorial on YouTube) using open-source models running locally.
It uses all-MiniLM-L6-v2 instead of OpenAI Embeddings, and StableVicuna-13B instead of OpenAI models.
It runs on the CPU, is impractically slow, and was created more as an experiment than a practical tool, but I am still fairly happy with the results.
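The local pipeline keeps the same retrieval shape as the original tutorial: split the PDF text into chunks, embed each chunk, find the chunks most similar to the question, and hand them to the language model as context. A minimal stdlib-only sketch of that idea, using a toy bag-of-words "embedding" as a stand-in for all-MiniLM-L6-v2 (all function names here are illustrative, not from the app):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; the app uses all-MiniLM-L6-v2 instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunk(chunks: list[str], question: str) -> str:
    # Retrieve the chunk most similar to the question; the app then
    # feeds the retrieved context plus the question to StableVicuna.
    q = embed(question)
    return max(chunks, key=lambda c: cosine(embed(c), q))

chunks = [
    "StableVicuna is a 13B parameter chat model.",
    "The embeddings are computed on the CPU.",
]
print(top_chunk(chunks, "How many parameters does StableVicuna have?"))
# → StableVicuna is a 13B parameter chat model.
```

Swapping in the real embedding model changes only the `embed` step; the retrieval logic stays the same.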
## Requirements
A GPU is not used and is not required.
You can squeeze it into 16 GB of RAM, but I recommend 24 GB or more.
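Most of that RAM goes to the model itself. As a back-of-the-envelope check (assuming the q4_2 format costs roughly 5 bits per weight: 4 quantized bits plus per-block scale overhead, which is an estimate, not a measured figure):

```python
params = 13e9         # 13B parameters
bits_per_weight = 5   # ~4-bit quantization plus scale overhead (assumption)

model_gb = params * bits_per_weight / 8 / 1e9
print(f"~{model_gb:.1f} GB")  # → ~8.1 GB
```

Add the OS, Streamlit, and the embedding model on top of that, and 16 GB gets tight.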
## Installation
- Install requirements (preferably into a venv): `pip install -r requirements.txt`
- Download `stable-vicuna-13B.ggml.q4_2.bin` from TheBloke/stable-vicuna-13B-GGML and place it in the project folder.
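If you want the venv route spelled out, a typical sequence looks like this, run from the project folder (the environment name `venv` is just a convention):

```shell
# create and activate an isolated environment, then install into it
python -m venv venv
source venv/bin/activate          # Windows: venv\Scripts\activate
pip install -r requirements.txt   # installs the app's dependencies
```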
## Usage
Run `streamlit run .\app.py`
This should launch the UI in your default browser. Select a PDF file, submit your question, and wait patiently.
