Loading local GPTQ LLM from safetensors
I'm trying to load TheBloke/Llama-2-7b-Chat-GPTQ from a local directory with the sample code provided here:
from ctransformers import AutoModelForCausalLM
llm = AutoModelForCausalLM.from_pretrained("./my_folder/TheBloke/Llama-2-7B-GPTQ", model_type="gptq")
but it seems like .from_pretrained looks for a .bin model rather than .safetensors, so it fails to load the model with an error like "No model file found in directory". I tried to pass use_safetensors, but as far as I can tell from the method's definition, it doesn't accept this parameter.
Can somebody guide me on what I should do to load GPTQ LLMs from a local path?
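One possible workaround is to pass the file name explicitly through from_pretrained's model_file argument, which is documented for selecting a specific model file in a repo or directory. A minimal sketch, assuming the folder contains a file named model.safetensors (adjust to whatever is actually in the folder) and that the GPTQ loader honors model_file; I haven't verified this:

from ctransformers import AutoModelForCausalLM

# Point from_pretrained at the .safetensors file explicitly instead of
# relying on the directory scan; "model.safetensors" is an assumed name.
llm = AutoModelForCausalLM.from_pretrained(
    "./my_folder/TheBloke/Llama-2-7B-GPTQ",
    model_file="model.safetensors",
    model_type="gptq",
)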
Please post the full error message with the stack trace, and make sure you are using the latest version:
pip install ctransformers --upgrade
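You can confirm which version is actually installed with the standard library, for example:

from importlib.metadata import version

# Prints the installed ctransformers version.
print(version("ctransformers"))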
Here is the error:
and the contents of the model folder:
ctransformers version: 0.2.17, installed with pip install ctransformers[gptq]