private-gpt
Interact with your documents using the power of GPT, 100% privately, no data leaks
Managed to get it working (ingested ~120 txt documents) and it appears to be functioning, but it doesn't provide the source for the first answer. It also adds another question...
Python is **using all available memory** instead of keeping usage under a MAX limit, which makes the program crash abruptly. Here is what I get when I run the command...
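One common mitigation for unbounded memory growth during ingestion is to process documents in fixed-size batches rather than loading everything at once. A minimal sketch follows; the `batched` helper and the commented `db.add_documents` call are hypothetical names for illustration, not part of privateGPT:

```python
from typing import Iterable, Iterator, List


def batched(items: Iterable[str], batch_size: int) -> Iterator[List[str]]:
    """Yield successive fixed-size batches so only one batch is held in memory."""
    batch: List[str] = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly smaller, batch
        yield batch


# Hypothetical usage: ingest batch by batch instead of all documents at once.
# for chunk in batched(documents, batch_size=50):
#     db.add_documents(chunk)  # e.g. a vector-store collection; name is illustrative
```

With a generator like this, peak memory is bounded by one batch plus the model's own working set, instead of the whole corpus.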
**Description:** Hello, I propose replacing the traditional Python `input` function with the `prompt_toolkit` library in our command-line interface (CLI). This transition could provide a more interactive, user-friendly,...
Made changes so that the "db" directory that stores the databases (set via the **PERSIST_DIRECTORY** environment variable) is created automatically and assigned to that environment variable, since most people forget to...
How do I increase the number of threads used for inference? I notice that privateGPT.py uses 4 CPU threads while running. I guess we can increase the number of threads to speed up...
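One way to make the thread count configurable is to resolve it from an environment variable with a CPU-count fallback. A minimal sketch, where `MODEL_N_THREADS` is a hypothetical variable name (not part of privateGPT's documented configuration):

```python
import os


def resolve_threads(default: int = 4) -> int:
    """Pick a thread count: honor MODEL_N_THREADS if set, else use all CPU cores.

    MODEL_N_THREADS is a hypothetical env-var name used for illustration.
    """
    env = os.environ.get("MODEL_N_THREADS")
    if env is not None and env.isdigit():
        return int(env)
    return os.cpu_count() or default


# The resolved value could then be passed to the model wrapper, e.g.
# LlamaCpp(model_path=model_path, n_threads=resolve_threads())
# (assuming the wrapper in use exposes an `n_threads` parameter).
```

Note that for CPU-bound llama.cpp inference, going beyond the number of physical cores often gives diminishing or negative returns.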
Addresses #124. With this new functionality, you can now add new documents to an existing chroma collection.
See the error here:

```
  File "privateGPT.py", line 26
    match model_type:
        ^
SyntaxError: invalid syntax
```

**Code is below:**

```python
import os
from dotenv import load_dotenv  # import was missing from the original snippet

load_dotenv()
llama_embeddings_model = os.environ.get("LLAMA_EMBEDDINGS_MODEL")
persist_directory = os.environ.get('PERSIST_DIRECTORY')
model_type = os.environ.get('MODEL_TYPE')
model_path =...
```
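That `SyntaxError` on `match model_type:` is the typical symptom of running the script on Python older than 3.10, where structural pattern matching does not exist. An equivalent `if`/`elif` dispatch works on earlier versions; a sketch, where the model-type strings and return values are illustrative rather than taken from the actual script:

```python
def select_model(model_type: str) -> str:
    """if/elif equivalent of a `match model_type:` block, for Python < 3.10.

    The model-type strings below are illustrative; use whatever values
    the script actually expects from the MODEL_TYPE environment variable.
    """
    if model_type == "LlamaCpp":
        return "llama"
    elif model_type == "GPT4All":
        return "gpt4all"
    else:
        raise ValueError(f"Unsupported model type: {model_type}")
```

Alternatively, upgrading to Python 3.10 or newer makes the original `match` statement valid as written.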
Does it support MacBook M1? I downloaded the two files mentioned in the README. After running `python ingest.py`, I tried to test it out and got the following errors. Thanks ``` llama_print_timings:...
When following the README, including downloading the model from the URL provided, I run into this on ingest: ``` llama.cpp: loading model from models/ggml-model-q4_0.bin error loading model: unknown (magic, version)...