
Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.

Results: 266 localGPT issues, sorted by recently updated

I am using the CPU for execution. I am able to run `python ingest.py --device_type cpu`, but when I use `python run_localGPT.py --device_type cpu` to run the chat bot in the command...
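
For context, here is a minimal sketch of how a `--device_type` flag can be threaded through to the embedding loader in a localGPT-style pipeline; the option name matches the commands above, but the rest (model name, structure) is illustrative, not localGPT's actual code:

```python
# Hypothetical sketch: passing --device_type through to the embeddings.
import click
from langchain.embeddings import HuggingFaceInstructEmbeddings

@click.command()
@click.option("--device_type", default="cuda", help="cpu, cuda, or mps")
def main(device_type):
    # The embedding model must be told explicitly to stay on the CPU;
    # otherwise it may still try to initialize CUDA and fail.
    embeddings = HuggingFaceInstructEmbeddings(
        model_name="hkunlp/instructor-large",  # example model
        model_kwargs={"device": device_type},
    )
    print(f"Embeddings loaded on: {device_type}")

if __name__ == "__main__":
    main()
```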

`pip install -r requirements.txt` needs changes: 1. remove `auto-gptq` (it needs to be compiled from source), 2. remove `autoawq`, and 3. add `cmake` (needed to run `onnxruntime`).

Hi, I'm currently experimenting with German-language documents and have used the multilingual embedding models quite successfully. However, when running Llama 2-Chat-7B I always get answers in English...
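
A common workaround is to pin the output language in the prompt template, since Llama 2 tends to answer in English even when the retrieved context is German. A minimal sketch, assuming a LangChain-style prompt (the German wording below is an example, not localGPT's default template):

```python
# Sketch: forcing German answers by pinning the language in the prompt.
from langchain.prompts import PromptTemplate

template = """Du bist ein hilfreicher Assistent. Beantworte die Frage
ausschliesslich auf Deutsch und nutze nur den folgenden Kontext.

Kontext: {context}

Frage: {question}

Antwort auf Deutsch:"""

qa_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=template,
)
```

In a LangChain-based pipeline like localGPT's, such a template can then be handed to the QA chain, e.g. via `chain_type_kwargs={"prompt": qa_prompt}` on `RetrievalQA.from_chain_type`.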

![1713161070951](https://github.com/PromtEngineer/localGPT/assets/113756481/78003584-eb3c-4934-afcf-69616526ddc4) And here is the web UI: ![1713161110060](https://github.com/PromtEngineer/localGPT/assets/113756481/8841c5f3-6041-49cc-a002-e433e4dd7bfd) How can I fix this bug?

With 32 GB of GPU memory, 64 GB of RAM, and an Intel i7 13th-gen processor, it is taking 2-4 minutes to respond and is not using the GPU. Using llama-cpp-python==0.1.83 --no-cache-dir ![image](https://github.com/PromtEngineer/localGPT/assets/98652405/bf7d7071-cabf-4e7b-8dc6-7168c93f963a) ![image](https://github.com/PromtEngineer/localGPT/assets/98652405/03fa4171-4f78-449d-82b5-b5666c302cab) what...
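
One likely cause: a stock `pip install llama-cpp-python` produces a CPU-only build, so the GPU is never touched. A hedged sketch of the usual fix, assuming a CUDA machine (the model path is an example):

```python
# Sketch: GPU offload with llama-cpp-python. First reinstall with cuBLAS
# enabled, e.g.:
#   CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python --no-cache-dir
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-7b-chat.Q4_K_M.gguf",  # example path
    n_gpu_layers=-1,  # offload all layers (recent versions; older builds want an explicit count)
    n_ctx=4096,
)
print(llm("Q: Is the GPU in use now? A:", max_tokens=32))
```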

Hi, I have a problem where the program keeps re-downloading the model for every new session. Does anyone know a fix for this? (P.S. I'm not a programmer, please...
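
Hugging Face models are normally cached under `~/.cache/huggingface` and only downloaded once; repeated downloads usually mean that cache location is not persistent (e.g., inside a fresh container each session). A minimal sketch of pinning the cache to a stable folder; the path and repo id are examples:

```python
# Sketch: keeping downloaded models in a persistent cache.
import os
os.environ["HF_HOME"] = "/srv/models/hf-cache"  # set before any download happens

from huggingface_hub import snapshot_download

# Downloads on the first call, then reuses the cached copy on later runs.
local_path = snapshot_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGUF",  # example repo
    cache_dir="/srv/models/hf-cache",
)
print(local_path)
```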

Dear team, please let me know how we can run localGPT using an Intel Iris GPU.
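
localGPT itself only exposes `cpu`, `cuda`, and `mps` device types, so Intel GPUs are not supported out of the box. One possible route, as an untested assumption, is Intel Extension for PyTorch and its `xpu` device:

```python
# Sketch (assumption, not localGPT behavior): running a model on an Intel
# GPU via Intel Extension for PyTorch. Requires Intel's XPU build of PyTorch.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # small example model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to("xpu").eval()
model = ipex.optimize(model)  # optional kernel optimizations

inputs = tokenizer("Hello from an Intel GPU", return_tensors="pt").to("xpu")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0]))
```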

```shell
2023-08-20 14:20:27,502 - INFO - run_localGPT.py:180 - Running on: cuda
2023-08-20 14:20:27,502 - INFO - run_localGPT.py:181 - Display Source Documents set to: True
2023-08-20 14:20:27,690 - INFO - SentenceTransformer.py:66...
```

Can we please support [Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) as one of the models, using 4-bit/8-bit quantisation of the original model? Currently, when I pass a query to localGPT, it returns me a...
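
A sketch of how such support could look, using the standard `transformers` + `bitsandbytes` 4-bit loading path; this is a suggestion, not existing localGPT behavior:

```python
# Sketch: loading Qwen-7B-Chat in 4-bit with bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/Qwen-7B-Chat"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # Qwen ships custom modeling code
)
```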