GPTQ-for-LLaMa
Does not support 3-bit quantization?
While running a 3-bit quantization of a 65B model, I encountered the following error during the pack stage:
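For context on why the pack stage is the step most likely to break at 3 bits: 32 is not evenly divisible by 3, so packed 3-bit weights straddle 32-bit word boundaries, unlike 4-bit or 8-bit widths where values align exactly. The sketch below is purely illustrative (it is not the repository's actual packing code) and shows a minimal round-trip of 3-bit values into 32-bit words:

```python
def pack_3bit(values):
    """Pack 3-bit values (0..7) into a list of 32-bit words.

    Because 32 % 3 != 0, some values span two adjacent words --
    the alignment headache that 4-bit packing (8 values per word)
    does not have. Uses a Python big int as a simple bit buffer.
    """
    acc = 0
    for i, v in enumerate(values):
        acc |= (v & 0b111) << (3 * i)
    nwords = (3 * len(values) + 31) // 32  # round up to whole words
    return [(acc >> (32 * k)) & 0xFFFFFFFF for k in range(nwords)]


def unpack_3bit(words, count):
    """Recover `count` 3-bit values from the packed 32-bit words."""
    acc = 0
    for k, w in enumerate(words):
        acc |= w << (32 * k)
    return [(acc >> (3 * i)) & 0b111 for i in range(count)]
```

For example, 11 values occupy 33 bits and therefore need two words, with the last value split across the word boundary:

```python
vals = [7, 0, 5, 3, 1, 6, 2, 4, 7, 7, 1]
packed = pack_3bit(vals)            # 2 words
assert unpack_3bit(packed, len(vals)) == vals
```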