Farquaad56

Results: 1 issue from Farquaad56

Hi, I have this issue when I load llama-7B on my RTX 2080 Ti:

```
(textgen) X:\LLama chat\text-generation-webui>python server.py --model llama-7b --load-in-8bit
Loading llama-7b...
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please...
```