Jim Zieleman

Results: 17 comments of Jim Zieleman

> godbless you this worked
>
> nothing worked until i ran this: `CMAKE_ARGS="-DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.48`
>
> (from [imartinez/privateGPT#120](https://github.com/imartinez/privateGPT/pull/120))
>
> Thank you @itgoldman. This worked...

What model is selected in your run_localGPT.py? If `model_id = "TheBloke/vicuna-7B-1.1-HF"` was selected, change it to one of the following for now: `model_id = "TheBloke/Wizard-Vicuna-7B-Uncensored-HF"` or `model_id = "TheBloke/guanaco-7B-HF"`.
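In code, the swap is a one-line change. A minimal sketch, assuming your copy of run_localGPT.py assigns `model_id` the same way as the lines quoted above:

```python
# run_localGPT.py -- sketch of the model swap; exact placement may differ in your copy
# model_id = "TheBloke/vicuna-7B-1.1-HF"                # previous selection, comment out
model_id = "TheBloke/Wizard-Vicuna-7B-Uncensored-HF"    # working HF model
# model_id = "TheBloke/guanaco-7B-HF"                   # alternative that also works
```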

Change the model to an HF model, as detailed in run_localGPT.py's main().

Comment out `model_id = "TheBloke/WizardLM-7B-uncensored-GPTQ"`, `model_basename = "WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors"`, and `llm = load_model(device_type, model_id=model_id, model_basename=model_basename)`, then uncomment `model_id = "TheBloke/Wizard-Vicuna-7B-Uncensored-HF"` and `llm = load_model(device_type, model_id=model_id)`, as in the sketch below.
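Put together, the edit looks roughly like this (a sketch; `load_model` and `device_type` are names run_localGPT.py already defines, and the surrounding lines may differ in your copy):

```python
# In run_localGPT.py main():

# Comment out the GPTQ model and its safetensors basename:
# model_id = "TheBloke/WizardLM-7B-uncensored-GPTQ"
# model_basename = "WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors"
# llm = load_model(device_type, model_id=model_id, model_basename=model_basename)

# Uncomment the HF model instead (no basename needed for HF models):
model_id = "TheBloke/Wizard-Vicuna-7B-Uncensored-HF"
llm = load_model(device_type, model_id=model_id)
```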

I'm waiting for this update to be pushed out, and then I will rebuild the API and UI around the new architecture. Should take < 24 hrs to get it...

Try installing the driver and then see if nvidia-smi reports 11.8. I use Ubuntu, so idk tbh.

Yeah, AutoGPTQ is very, very picky. I was trying to run this on my Windows (and WSL) setup for a while and just gave up and went back...

TBH it's so confusing why Windows is offered such a limited history of drivers, while on Linux you can go back over a year in driver history.