
Error occurred when executing T5TextEncode #ELLA (RX 580, i3-9100F, Windows 11, 32 GB RAM)

KillyTheNetTerminal opened this issue 1 year ago · 6 comments

Error occurred when executing T5TextEncode #ELLA:

"addmm_impl_cpu_" not implemented for 'Half'

File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ELLA\ella.py", line 228, in encode cond = text_encoder_model(text, max_length=max_length) File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ELLA\model.py", line 158, in call outputs = self.model(text_input_ids, attention_mask=attention_mask) # type: ignore File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 1980, in forward encoder_outputs = self.encoder( File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 1115, in forward layer_outputs = layer_module( File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 695, in forward self_attention_outputs = self.layer[0]( File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 602, in forward attention_output = self.SelfAttention( File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 521, in forward query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, seq_length, dim_per_head) File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward return F.linear(input, self.weight, self.bias)

KillyTheNetTerminal avatar May 07 '24 23:05 KillyTheNetTerminal
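The failing frame is the last one, `F.linear(input, self.weight, self.bias)`: the T5 weights are in fp16 while the forward pass is dispatched to the CPU, and PyTorch builds of this vintage (2.0.x, judging by the `module.py` line numbers) have no fp16 (`Half`) `addmm` kernel on CPU. A minimal sketch of the failure mode, independent of ComfyUI:

```python
import torch

# Minimal repro, assuming a PyTorch build (like the 2.0.x in this trace)
# without fp16 "addmm" kernels on CPU: a half-precision nn.Linear
# evaluated on the CPU fails inside F.linear, exactly as above.
linear = torch.nn.Linear(4, 4).half()    # fp16 weights, CPU tensors
x = torch.randn(1, 4, dtype=torch.half)

try:
    linear(x)                             # F.linear -> addmm -> RuntimeError
except RuntimeError as err:
    print(err)                            # "addmm_impl_cpu_" not implemented for 'Half'

# The CPU-safe path is to stay in fp32, which is what the --fp32-text-enc
# flag asks ComfyUI to do for the text encoder:
print(linear.float()(x.float()))
```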

Use the `--fp32-text-enc` launch flag.

JettHu avatar May 08 '24 00:05 JettHu
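`--fp32-text-enc` is a built-in ComfyUI launch argument that keeps text encoders in fp32 instead of fp16, so the missing `Half` CPU kernel is never needed. It goes on the command line that starts ComfyUI, e.g. `python main.py --fp32-text-enc`; with a portable build, append it to the launch `.bat` you use.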

oh my god thanks it worked!

KillyTheNetTerminal avatar May 08 '24 16:05 KillyTheNetTerminal

[image: imagen_2024-05-08_120449476]

Exactly the same workflow with the same model, but this is the output. Am I missing something?

[image: imagen_2024-05-08_120523659]

KillyTheNetTerminal avatar May 08 '24 17:05 KillyTheNetTerminal

It looks like using `--fp32-text-enc` affects the results.

The results on my machine are similar to yours.

[image]

JettHu avatar May 09 '24 06:05 JettHu

The effect is somewhat different on some GPU models that cannot run fp16. This may be something we need to pay attention to in the future. cc @budui

JettHu avatar May 09 '24 06:05 JettHu
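The drift JettHu describes can be gauged directly by encoding the same tokens once in fp32 and once in fp16 and comparing the outputs. Below is a rough sketch with a toy stand-in for the T5 encoder; the real check would run ELLA's encoder twice on a GPU that supports both dtypes:

```python
import torch
import torch.nn.functional as F

# Toy stand-in for a text encoder, used only to gauge how far a fp16
# forward pass drifts from the fp32 reference. Run it on a GPU, or on a
# PyTorch recent enough to ship fp16 CPU kernels, so the Half path does
# not crash as in the trace above.
device = "cuda" if torch.cuda.is_available() else "cpu"
torch.manual_seed(0)

encoder = torch.nn.Sequential(
    torch.nn.Linear(512, 2048),
    torch.nn.GELU(),
    torch.nn.Linear(2048, 512),
).to(device)
tokens = torch.randn(77, 512, device=device)  # one 77-token "prompt"

with torch.no_grad():
    ref = encoder(tokens)                           # fp32 reference
    approx = encoder.half()(tokens.half()).float()  # fp16 round trip

cos = F.cosine_similarity(ref, approx, dim=-1)
print(f"mean cosine similarity fp32 vs fp16: {cos.mean().item():.6f}")
print(f"max abs difference: {(ref - approx).abs().max().item():.6f}")
```

Per-layer rounding compounds through a real encoder's stacked blocks, so even a high per-token similarity can shift the conditioning enough to change the sampled image, which would explain why fp16-capable and fp16-incapable GPUs produce visibly different results.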

Is there a way to solve this? Can't the RX580 use fp16?

KillyTheNetTerminal avatar May 09 '24 21:05 KillyTheNetTerminal