localGPT
Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
Hi all, my situation is that when I run the code from the terminal on the GPU, it runs very well and very fast....
I need assistance. I execute 'python localGPTUI.py', then go to 'http://127.0.0.1:5111', where I see the HTML page. I enter a search prompt and am faced with '500...
So when I run localGPT it is fine, but when I run run_localGPT_API and ask a question, it always throws an "exceeded context window" error. Is there anyone...
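A context-window error usually means the retrieved chunks plus the question no longer fit in the model's prompt. A minimal sketch of the usual workaround, assuming the ingest and query code use LangChain's text splitter and a Chroma retriever (names, paths, and values below are illustrative, not the project's defaults):

```python
# Sketch only: smaller chunks at ingest time and fewer retrieved chunks at
# query time keep the prompt inside the model's context window.
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

# At ingest time: split documents into smaller pieces than the default.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_text(open("SOURCE_DOCUMENTS/example.txt").read())  # example file

# Build / reopen the vector store and retrieve fewer chunks per question.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = Chroma.from_texts(chunks, embeddings, persist_directory="DB")
retriever = db.as_retriever(search_kwargs={"k": 2})  # fewer chunks -> shorter prompt
```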
How to make LocalGPT translate everything into English before storing and processing inputs
Hi, first of all, I really like this project; it's better than PrivateGPT, thank you! Secondly, I want to use LocalGPT for Slovak documents, but it's impossible because no LLM...
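LocalGPT has no built-in translation step, but one hedged option is to pre-translate the documents before running ingest.py. The sketch below uses a Hugging Face translation pipeline with the Helsinki-NLP/opus-mt-sk-en Slovak-to-English model; the file paths and chunking limit are assumptions for illustration:

```python
# Sketch: pre-translate Slovak text to English before ingestion.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-sk-en")

def translate_to_english(text: str, max_chars: int = 1000) -> str:
    """Translate in chunks so long documents do not exceed the model's input limit."""
    pieces = [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
    return " ".join(out["translation_text"] for out in translator(pieces))

with open("SOURCE_DOCUMENTS/dokument_sk.txt", encoding="utf-8") as f:   # example path
    english = translate_to_english(f.read())
with open("SOURCE_DOCUMENTS/dokument_en.txt", "w", encoding="utf-8") as f:
    f.write(english)
```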
We would like to be able to download the source document. How easy is it to add a download option alongside the document name display? Any guidance is...
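Assuming the UI is a Flask app and the originals live in a SOURCE_DOCUMENTS folder, a download endpoint could look roughly like this (route name and folder are illustrative, not existing project code):

```python
# Sketch of a download route that serves files from the documents folder.
import os
from flask import Flask, send_from_directory, abort

app = Flask(__name__)
SOURCE_DIR = os.path.abspath("SOURCE_DOCUMENTS")  # assumed folder of original files

@app.route("/download/<path:filename>")
def download_document(filename):
    if not os.path.isfile(os.path.join(SOURCE_DIR, filename)):
        abort(404)
    return send_from_directory(SOURCE_DIR, filename, as_attachment=True)

if __name__ == "__main__":
    app.run(port=5111)
```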
After installing everything and ingesting a document, I tried to run `run_localGPT.py`, but when I enter a query I get the error described in the title. Here's the log:...
As the title says, how do you run the UI version inside a Docker container? Or rather, how do you use the UI with the app running in Docker?
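One approach, sketched here under assumptions, is to run the API inside the container with its port published and let the UI (or any client) on the host talk to that port. The image name, port, and route below are assumptions, not confirmed project defaults:

```python
# Sketch: with the API container started as, e.g.,
#   docker run -p 5110:5110 localgpt      (assumed image name and port)
# a client on the host can reach it through the published port.
import requests

API_BASE = "http://localhost:5110"           # assumed published API port
resp = requests.post(
    f"{API_BASE}/api/prompt_route",          # assumed route the UI posts to
    data={"user_prompt": "What does the document say about X?"},
)
print(resp.json())
```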
If Hugging Face is down, we are unable to load models that we have previously downloaded. It would be nice to have some check/safeguard for this, also just in case...
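As a stopgap, the Hugging Face libraries can be forced into offline mode so that models already in the local cache load without contacting the Hub. A minimal sketch (the model id is a placeholder for whatever was downloaded earlier):

```python
# Sketch: force offline mode so cached models load even if the Hub is unreachable.
import os
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "your-previously-downloaded/model-id"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(model_id, local_files_only=True)
```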
I have set up Conda and all required dependencies. After that, while executing "ingest.py" I get the error below: "C:\Users\Pallavi\AppData\Roaming\Python\Python311\site-packages\langchain\vectorstores\__init__.py:35: LangChainDeprecationWarning: Importing vector stores from langchain is deprecated....
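That warning means newer LangChain versions have moved the vector-store classes into the separate langchain-community package. A hedged sketch of the updated import (requires `pip install langchain-community`):

```python
# Old import that triggers LangChainDeprecationWarning:
# from langchain.vectorstores import Chroma

# Newer LangChain versions expect the community package instead:
from langchain_community.vectorstores import Chroma
```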
Hello everyone, I have a question: can I use "localGPT" with 100k PDFs? I tested different RAG tutorials and, unfortunately, it took more than 10 minutes to reply to...