
Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.

266 localGPT issues, sorted by recently updated

Hey, is it worth supporting GGML quantized models for CPU and MPS? I've done some testing with TheBloke's GGML models. Most of them are supported by llama-cpp, which...

Hi @PromtEngineer, I would like to contribute to your repo by adding a feature to support .doc and .docx files in localGPT. I have tested it and it is working fine....

Adds argparse to `localGPTUI.py` to make it easier to specify a custom port and host (e.g., 0.0.0.0 to expose the web server to the internet on a public machine). Defaults to 127.0.0.1:5111,...
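The change described above can be sketched as follows. This is a hypothetical reconstruction, not the PR's actual code: the option names and the Flask call are assumptions; only the defaults (127.0.0.1:5111) come from the issue text.

```python
import argparse

def parse_args(argv=None):
    # Hypothetical reconstruction of the argparse addition; defaults mirror
    # the issue text (127.0.0.1:5111), option names are assumed.
    parser = argparse.ArgumentParser(description="localGPT web UI options")
    parser.add_argument("--host", default="127.0.0.1",
                        help="interface to bind; use 0.0.0.0 to expose the server externally")
    parser.add_argument("--port", type=int, default=5111,
                        help="port for the web UI")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    # A Flask app would then be started with these values, e.g.:
    # app.run(host=args.host, port=args.port)
    print(f"Serving on {args.host}:{args.port}")
```

Taking `argv` as an optional parameter keeps the parser testable without touching `sys.argv`.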

ERROR: Could not find a version that satisfies the requirement langchain==0.0.221 (from versions: 0.0.1, 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.14, 0.0.15, 0.0.16, 0.0.17,...
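When pip reports that no matching version exists, a common cause is that the active interpreter or pip is too old to see the pinned release (langchain's packaging declares a minimum Python version). A quick diagnostic, assuming a standard pip setup:

```shell
# Check which interpreter and pip the install command is actually using;
# an unsupported Python version or an outdated pip is a common cause of
# "Could not find a version that satisfies the requirement".
python --version
python -m pip --version
# If pip is old, upgrading it first often fixes resolution failures:
#   python -m pip install --upgrade pip
#   python -m pip install "langchain==0.0.221"
```

Running pip via `python -m pip` guarantees the check targets the same environment that will perform the install.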

I have set things up on an Ubuntu 22.04 box as per the instructions: Anaconda installed, a new Python 3.10 env created. I get part way through `python ingest.py --device_type cpu` ```...

C:\Users\jiaojiaxing\.conda\envs\localgpt\python.exe E:\jjx\localGPT\apiceshi.py load INSTRUCTOR_Transformer max_seq_length 512 bin C:\Users\jiaojiaxing\.conda\envs\localgpt\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll Loading checkpoint shards: 100%|██████████| 3/3 [01:32

Getting the following error under Windows 11: OS Name: Microsoft Windows 11 Pro OS Version: 10.0.22631 N/A Build 22631 OS Manufacturer: Microsoft Corporation OS Configuration: Standalone Workstation OS Build Type:...

I already have the Ollama service running locally and a model ready to use with a Modelfile. I would like to use those models in constants.py, but I can't find how...

Hi, please add support for Llama 3. Currently the prompt template is not compatible, since Llama 3 uses a different style. Ref: https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3 As is, I was unable to use the Llama 3...
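For reference, a minimal sketch of the Llama 3 chat prompt layout the issue refers to. The special tokens below follow Meta's published prompt-format documentation; whether the exact model build in use expects these tokens is an assumption to verify.

```python
def build_llama3_prompt(system_prompt: str, user_message: str) -> str:
    """Assemble a single-turn prompt in the Llama 3 chat format.

    Token names follow Meta's prompt-format docs (assumption: the
    tokenizer build in use expects these exact special tokens).
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful assistant.",
                             "Summarize this document.")
```

The trailing assistant header is left open so generation continues as the assistant's reply, which is why a Llama 2-style template produces malformed output with this model.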

Adds Groq support, save_qa, etc. to the run_localGPT_API.py script