
GPTQ with --cpu: RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'

Open dblacknc opened this issue 1 year ago • 1 comment

The vicuna-13b-int4 model runs very well on my RTX 3060. Out of curiosity I added --cpu to run it there for a performance comparison. On the first prompt, a traceback and error were printed. I had been using --sdp-attention and saw it mentioned in the first traceback, so I removed it and then saw the traceback below. Given the error, I'm assuming this is just a missing feature, though it could also be an unintended bug.

OS: Ubuntu 22.04

Traceback (most recent call last):
  File "/root/text-generation-webui/modules/callbacks.py", line 66, in gentask
    ret = self.mfunc(callback=_callback, **self.kwargs)
  File "/root/text-generation-webui/modules/text_generation.py", line 245, in generate_with_callback
    shared.model.generate(**kwargs)
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/generation/utils.py", line 1485, in generate
    return self.sample(
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/generation/utils.py", line 2524, in sample
    outputs = self(
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 687, in forward
    outputs = self.model(
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 577, in forward
    layer_outputs = decoder_layer(
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 292, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 196, in forward
    query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
  File "/root/miniconda3/envs/textgen/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/text-generation-webui/repositories/GPTQ-for-LLaMa/quant.py", line 374, in forward
    x = torch.matmul(x, weights.to(x.dtype))
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
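The failing call in quant.py boils down to a half-precision matmul running on the CPU, which some PyTorch CPU builds do not implement. A minimal sketch of the failure mode and the usual cast-to-float32 workaround (the tensor names and shapes here are illustrative, not taken from the repo):

```python
import torch

# Half-precision tensors, as GPTQ dequantization produces on the GPU path.
x = torch.randn(1, 8, dtype=torch.float16)  # activations
w = torch.randn(8, 8, dtype=torch.float16)  # dequantized weights

try:
    # On affected PyTorch builds this raises:
    # RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
    y = torch.matmul(x, w)
except RuntimeError:
    # Common workaround for CPU inference: upcast to float32 first.
    y = torch.matmul(x.float(), w.float())

print(y.shape)
```

Whether the half-precision path raises depends on the PyTorch version; newer CPU builds implement it, so the except branch may never run.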

dblacknc avatar Apr 12 '23 16:04 dblacknc

To my understanding, GPU models do not run on CPU only. You must use a CPU-specific model, usually one whose name starts with ggml.

franklin050187 avatar Apr 16 '23 13:04 franklin050187

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.

github-actions[bot] avatar Sep 06 '23 23:09 github-actions[bot]