
Loading facebook_opt-350m... Could not find the quantized model in .pt or .safetensors format, exiting

frosty1234 opened this issue 1 year ago • 2 comments

Describe the bug

After downloading facebook_opt-350m from the download-model bat file, I ran into this error while running the start-webui bat file.

Inside the start-webui bat file, I'm using this line: call python server.py --auto-devices --chat --wbits 4 --groupsize 128

After checking inside the facebook_opt-350m model folder, I couldn't find any files ending in .pt or .safetensors. The largest file was a 646 MB pytorch_model.bin, so the model is only available in .bin format. Am I missing a step here? Or a few steps?
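For context, the check that fails here can be sketched as follows. This is an illustrative reconstruction, not webui's exact code: with `--wbits` set, the loader looks for a GPTQ-quantized checkpoint (`.pt` or `.safetensors`) in the model folder and exits if none is found, which is exactly the situation when the folder only contains `pytorch_model.bin`.

```python
# Illustrative sketch (not webui's actual implementation): when --wbits is
# passed, the loader searches the model folder for a GPTQ checkpoint in
# .pt or .safetensors format and gives up if only a .bin file is present.
from pathlib import Path

def find_quantized_checkpoint(model_dir):
    """Return the first .pt or .safetensors file in model_dir, or None."""
    model_dir = Path(model_dir)
    for ext in (".pt", ".safetensors"):
        matches = sorted(model_dir.glob(f"*{ext}"))
        if matches:
            return matches[0]
    return None  # only pytorch_model.bin present -> webui exits

if __name__ == "__main__":
    ckpt = find_quantized_checkpoint("models/facebook_opt-350m")
    if ckpt is None:
        print("Could not find the quantized model in .pt or .safetensors format, exiting...")
```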

Is there an existing issue for this?

  • [X] I have searched the existing issues

Reproduction

After downloading facebook_opt-350m from the download-model bat file, I ran into this error while running the start-webui bat file.

Screenshot

No response

Logs

Loading facebook_opt-350m...
Could not find the quantized model in .pt or .safetensors format, exiting...
Press any key to continue . . .

System Info

Windows 11
Acer, Nitro 5
NVIDIA

frosty1234 avatar Apr 10 '23 17:04 frosty1234

Remove --wbits 4 --groupsize 128. Those flags are only for GPTQ-quantized models: they tell webui to load a 4-bit model with a groupsize of 128, which facebook_opt-350m is not.
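For reference, the corrected line in start-webui.bat would then look like this (a sketch that keeps the original's other flags unchanged):

```bat
call python server.py --auto-devices --chat
```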

jllllll avatar Apr 11 '23 00:04 jllllll

Thanks! This worked for me on facebook_opt-1.3b

world5am avatar Apr 14 '23 14:04 world5am

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.

github-actions[bot] avatar Oct 02 '23 23:10 github-actions[bot]