
No such operator xformers::efficient_attention_forward_cutlass

Open lolxdmainkaisemaanlu opened this issue 2 years ago • 56 comments

```
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/txt2img.py", line 49, in txt2img
    processed = process_images(p)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 430, in process_images
    res = process_images_inner(p)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 531, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 664, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers.py", line 507, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers.py", line 422, in launch_sampling
    return func()
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers.py", line 507, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers.py", line 315, in forward
    x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 776, in forward
    h = module(h, emb, context)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_checkpoint.py", line 4, in BasicTransformerBlock_forward
    return checkpoint(self._forward, x, context)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/checkpoint.py", line 249, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/checkpoint.py", line 107, in forward
    outputs = run_function(*args)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/attention.py", line 272, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 227, in xformers_attention_forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
  File "/usr/local/lib/python3.8/dist-packages/xformers/ops/memory_efficient_attention.py", line 967, in memory_efficient_attention
    return op.forward_no_grad(
  File "/usr/local/lib/python3.8/dist-packages/xformers/ops/memory_efficient_attention.py", line 343, in forward_no_grad
    return cls.FORWARD_OPERATOR(
  File "/usr/local/lib/python3.8/dist-packages/xformers/ops/common.py", line 11, in no_such_operator
    raise RuntimeError(
RuntimeError: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with `python setup.py develop`?
```

lolxdmainkaisemaanlu avatar Dec 07 '22 18:12 lolxdmainkaisemaanlu
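The bottom frames of the traceback explain why the notebook starts up fine and only fails once generation runs: when xformers fails to load its compiled `_C` extension, the Python layer (`xformers/ops/common.py`) substitutes a stub that raises on first call rather than at import time. A minimal sketch of that fallback pattern (the `get_operator` helper and `_REGISTRY` table here are illustrative, not the real xformers API):

```python
# Sketch of a "missing operator" fallback, modeled on the pattern visible in
# xformers/ops/common.py: if the compiled extension did not load, look-ups
# return a stub that raises a descriptive error only when first called.
# Names (_REGISTRY, get_operator) are illustrative, not xformers' actual API.

_REGISTRY = {}  # (library, name) -> compiled operator; filled only if _C.so loaded


def get_operator(library, name):
    def no_such_operator(*args, **kwargs):
        raise RuntimeError(
            f"No such operator {library}::{name} - did you forget to build "
            "xformers with `python setup.py develop`?"
        )

    return _REGISTRY.get((library, name), no_such_operator)
```

Because the stub is only hit on first use, the Colab can mount Drive, download models, and launch Gradio normally, then fail the moment txt2img or img2img actually calls the attention op.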

@TheLastBen same error

dravid-cmyk avatar Dec 07 '22 18:12 dravid-cmyk

same

iquarx avatar Dec 07 '22 18:12 iquarx

Same error. I get this when I'm using img2img.

I added the following snippet right before the last step, but it didn't fix the problem: `!python /content/gdrive/MyDrive/sd/stablediffusion/setup.py develop`

CJohnDesign avatar Dec 07 '22 18:12 CJohnDesign

same error

Omenizer avatar Dec 07 '22 18:12 Omenizer

Same here

Ktr4ks avatar Dec 07 '22 18:12 Ktr4ks

Doesn't matter if I use Dreambooth or Fast SD, same result. It shows a warning when running the Colab, and then Gradio can't process the request.

SorenTruelsen avatar Dec 07 '22 18:12 SorenTruelsen

Same error

Training the text encoder...

```
/usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
WARNING: /usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
Need to compile C++ extensions to get sparse attention support. Please run python setup.py build develop
/usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py:429: UserWarning: Could not enable memory efficient attention. Make sure xformers is installed correctly and a GPU is available: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with `python setup.py develop`?
  warnings.warn(
```

noedvil10 avatar Dec 07 '22 18:12 noedvil10

Same here

Tdlreis avatar Dec 07 '22 19:12 Tdlreis

Only works on a premium GPU, for which xformers isn't used, I think.

zuencap avatar Dec 07 '22 19:12 zuencap

Only works on a premium GPU, for which xformers isn't used, I think.

So we won't be able to use SD on the free Colab anymore?

dravid-cmyk avatar Dec 07 '22 19:12 dravid-cmyk

I noticed this early in the start process:

```
/usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
WARNING:xformers:WARNING: /usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
Need to compile C++ extensions to get sparse attention support. Please run python setup.py build develop
```

CJohnDesign avatar Dec 07 '22 19:12 CJohnDesign
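That `undefined symbol` line is the root cause: the prebuilt `_C.so` was compiled against a different PyTorch build than the one installed in the Colab runtime, so the mangled C++ reference (`c10::impl::OperatorEntry::reportSignatureError`) can't be resolved at load time. As a rule of thumb, a compiled extension has to match torch at least down to the minor version. A hypothetical checker for that rule (not part of xformers or torch):

```python
# Hypothetical compatibility check: a compiled extension built against one
# torch major.minor generally fails to load against another, surfacing as
# exactly this kind of "undefined symbol" error. Illustrative helper only.

def versions_compatible(built_against: str, installed: str) -> bool:
    def major_minor(version: str):
        # Strip local build tags like "1.13.0+cu116" before comparing.
        return tuple(int(part) for part in version.split("+")[0].split(".")[:2])

    return major_minor(built_against) == major_minor(installed)
```

Under this rule, an xformers wheel built for torch 1.12.x would be expected to break the day the Colab image moved to torch 1.13, which matches everyone hitting this at once.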

I tried SD1.5 to see if that fixes it but same problem when using img2img

CJohnDesign avatar Dec 07 '22 19:12 CJohnDesign

Same here, starting today

lhucklen avatar Dec 07 '22 19:12 lhucklen

It doesn't work on a premium GPU either, so that's a red herring.

Quark999 avatar Dec 07 '22 19:12 Quark999

Same here, on both Colab notebooks: Dreambooth and Automatic1111.

metzo007 avatar Dec 07 '22 19:12 metzo007

After using

!python setup.py build develop

I still get the same error below.

RuntimeError: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with python setup.py develop?

lhucklen avatar Dec 07 '22 19:12 lhucklen
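Running `setup.py build develop` by itself won't help if the broken wheel in `site-packages` is still the copy Python imports: whatever wins on `sys.path` gets loaded, not whatever was last built. A quick way to check which copy a module would actually be imported from (illustrative helper, not from the repo):

```python
# Illustrative diagnostic: show which file a module would be imported from,
# since a stale wheel in site-packages can shadow a freshly built source tree.
import importlib.util


def import_origin(module_name):
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None
```

On the Colab in question, `import_origin("xformers")` would presumably point at `/usr/local/lib/python3.8/dist-packages/xformers/__init__.py`, i.e. the incompatible wheel, regardless of any source build done elsewhere.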

Having this issue too, I got flagged as posting a duplicate.

createperhaps avatar Dec 07 '22 20:12 createperhaps

same issue

ReEnMikki avatar Dec 07 '22 20:12 ReEnMikki

Converting to Diffusers ...

```
/usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
WARNING:xformers:WARNING: /usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
Need to compile C++ extensions to get sparse attention support. Please run python setup.py build develop
/usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py:429: UserWarning: Could not enable memory efficient attention. Make sure xformers is installed correctly and a GPU is available: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with `python setup.py develop`?
  warnings.warn(
```

Error on Model Download cell of fast dreambooth

ReEnMikki avatar Dec 07 '22 20:12 ReEnMikki

same issue

Iwori avatar Dec 07 '22 20:12 Iwori

Similar message for me. It seemed to occur every time I saved a checkpoint:

```
/usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
WARNING:xformers:WARNING: /usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
Need to compile C++ extensions to get sparse attention support. Please run python setup.py build develop
/usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py:429: UserWarning: Could not enable memory efficient attention. Make sure xformers is installed correctly and a GPU is available: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with `python setup.py develop`?
  warnings.warn(
Done, resuming training ...
```

iaincambeul avatar Dec 07 '22 21:12 iaincambeul

Seems to be fixed: https://github.com/TheLastBen/fast-stable-diffusion/commit/3cf2052a3d3f837801b5d77cb136aa276653a261

CJohnDesign avatar Dec 07 '22 21:12 CJohnDesign

Seems to be fixed: 3cf2052

Just tried running it a few minutes ago and I got the same error.

ammar1y avatar Dec 07 '22 21:12 ammar1y

fixed (for the T4 at least), re-run the requirements cell

TheLastBen avatar Dec 07 '22 21:12 TheLastBen
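Re-running the requirements cell works because it installs a prebuilt xformers wheel matched to the session's GPU; CUDA kernels like `efficient_attention_forward_cutlass` are compiled per architecture, which is presumably why the T4 was fixed before the A100. A sketch of what such per-GPU selection has to do (the `WHEELS` table and file names below are made up for illustration):

```python
# Hypothetical sketch of per-GPU wheel selection, as a requirements cell
# might do it: pick a prebuilt xformers wheel by the reported GPU name.
# The WHEELS table and file names are invented for illustration only.
WHEELS = {
    "T4": "xformers-T4.whl",
    "V100": "xformers-V100.whl",
    "A100": "xformers-A100.whl",
}


def pick_wheel(gpu_name: str):
    for model, wheel in WHEELS.items():
        if model.lower() in gpu_name.lower():
            return wheel
    return None  # no prebuilt wheel for this GPU; would need a source build
```

Under this scheme, a GPU missing from the table (or a newly rebuilt wheel not yet uploaded for it) leaves that GPU broken even after the cell is re-run, consistent with the A100 reports below.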

https://github.com/TheLastBen/fast-stable-diffusion/issues/904#issuecomment-1341612026

Retrying now! Thanks for quick response!

HamedMP avatar Dec 07 '22 21:12 HamedMP

fixed (for the T4 at least), re-run the requirements cell

what about paid pro users??? can we access too?

LIQUIDMIND111 avatar Dec 07 '22 22:12 LIQUIDMIND111

This is fixed, @lolxdmainkaisemaanlu can we close?

CJohnDesign avatar Dec 07 '22 22:12 CJohnDesign

what about paid pro users??? can we access too?

are you getting the error with the A100 ?

TheLastBen avatar Dec 07 '22 23:12 TheLastBen

Works for me now

Omenizer avatar Dec 08 '22 00:12 Omenizer

Can't get it to work with the A100. Using the standard GPU works. @TheLastBen

gregormelhorn avatar Dec 08 '22 00:12 gregormelhorn