localGPT
Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
"Torch not compiled with CUDA enabled" on windows 10
I tried it in Colab and it gives no error, but when I try to execute run_localGPT.py, the output shows ^C and the process exits. Please advise.
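An abrupt ^C in Colab is often the runtime's out-of-memory killer ending the process rather than a Python error. The sketch below is a rough way to check RAM headroom before loading the model; the 14 GB threshold is only an assumption for a 7B model in half precision, not a measured figure.

```python
# Minimal sketch: check free RAM before loading the model in Colab.
import psutil

avail_gb = psutil.virtual_memory().available / 1e9
print(f"Available RAM: {avail_gb:.1f} GB")

# Assumption: ~14 GB is a rough floor for Vicuna-7B in fp16; adjust for your setup.
if avail_gb < 14:
    print("Likely not enough RAM; try a high-RAM runtime or a smaller/quantized model.")
```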
I would like to express my appreciation for the excellent work you have done with this project. I admire your use of the Vicuna-7B model and InstructorEmbeddings to enhance performance...
It takes a lot of time to get a result

A way to avoid being spammed with sources and other text that hides the answer
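For the output-spam complaint above, a minimal sketch under the assumption that run_localGPT.py's question-answering chain follows LangChain's RetrievalQA pattern and returns a dict with "result" and "source_documents"; printing only the answer keeps the source chunks out of the console.

```python
# Minimal sketch, assuming `qa` is a LangChain RetrievalQA chain and `query` is the user's question.
res = qa(query)            # dict with "result" and, if enabled, "source_documents"
print(res["result"])       # just the answer, no source chunks

# Optional: list only the file names instead of dumping full source texts.
for doc in res.get("source_documents", []):
    print("source:", doc.metadata.get("source"))
```

Alternatively, building the chain with return_source_documents=False should drop the sources entirely, at the cost of losing the citations.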
I've been able to run ingest.py and it seems to work. A chroma-collections.parquet and a chroma-embeddings.parquet are created in the same folder as ingest.py. When I run run_localGPT.py, it generates...
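Those parquet files are the old DuckDB-backed Chroma persistence format. A minimal sketch to verify that the index ingest.py produced can actually be loaded and queried; the "DB" directory and the instructor-large model name mirror localGPT's defaults but are assumptions here.

```python
# Minimal sketch: load the persisted Chroma index and run a test retrieval.
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import Chroma

embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-large")
db = Chroma(persist_directory="DB", embedding_function=embeddings)  # adjust path if ingest.py wrote elsewhere

docs = db.similarity_search("test query", k=4)
print(f"Retrieved {len(docs)} chunks")
```

If this retrieves chunks, the ingestion step is fine and the problem lies in the model-loading or generation stage of run_localGPT.py.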
Installation was smooth, no problem. So I run python ingest.py and everything is fine, but then later: load INSTRUCTOR_Transformer max_seq_length 512 Using embedded DuckDB with persistence: data will be stored...
MacBook M1 Pro, Ventura 13, Python 3.10. When trying to start ingest.py, I get this error: ``` Loading documents from /Users/artur/localGPT/SOURCE_DOCUMENTS Loaded 1 documents from /Users/artur/localGPT/SOURCE_DOCUMENTS Split into 72 chunks...
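On Apple Silicon it helps to confirm which PyTorch device is actually usable before running ingest.py. The sketch below only covers device selection and is not a fix for whatever the truncated error above turns out to be.

```python
# Minimal sketch: pick a usable device on an M1 Mac (Metal/MPS if available, else CPU).
import torch

if torch.backends.mps.is_available():
    device = "mps"
elif torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"
print(f"Using device: {device}")
```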
Hello, I'm trying to run it on Google Colab: * The first script `ingest.py` finishes quite fast (around 1 min) * Unfortunately, the second script `run_localGPT.py` gets stuck 7 min before...