Ben
Also try other Colabs if the same issue happens.
For the K80? 287 KB looks a bit small; it should be at least 19 MB, so maybe it didn't compile well. Try compiling it with Google Colab if you get...
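Something like this in a Colab cell should do it (a sketch, not a pinned recipe: it installs ninja to speed up the build, then builds the upstream xformers repo from source at whatever its current main is):

```python
# Build xformers from source in a Colab cell (sketch; not pinned to a release)
!pip install ninja   # optional, but makes the CUDA build much faster
!pip install -v git+https://github.com/facebookresearch/xformers.git
```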
GPUs unsupported by flash attention don't produce a `_C_flashattention.so` after compiling, but they still benefit from a speed increase.
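If you want to check whether the flash-attention extension actually got built, something like this works (a sketch; it just searches the installed package directory for the compiled `.so` and prints its size):

```python
# Look for the compiled flash-attention extension inside the xformers package
# (sketch; the .so name/location reflect how xformers shipped compiled ops)
import glob
import os

import xformers

pkg_dir = os.path.dirname(xformers.__file__)
hits = glob.glob(os.path.join(pkg_dir, "**", "*flashattention*.so"), recursive=True)
if hits:
    for so in hits:
        print(so, os.path.getsize(so) // 1024, "KB")
else:
    print("no _C_flashattention.so found: GPU likely unsupported by flash attention")
```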
It's the compiled C/C++/CUDA code responsible for the xformers-specific operations (memory-efficient attention included), built for the underlying machine (Python version, CUDA version, ..).
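A quick way to smoke-test that the compiled ops load and run on the current machine (a sketch; the `[batch, seq_len, heads, head_dim]` tensor layout follows xformers' `memory_efficient_attention` documentation, and it assumes a CUDA GPU is attached):

```python
# Smoke test: does the compiled memory-efficient attention op run here?
import torch
import xformers.ops as xops

q = torch.randn(1, 128, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 128, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 128, 8, 64, device="cuda", dtype=torch.float16)

# Raises if the build doesn't match the machine it was compiled against
out = xops.memory_efficient_attention(q, k, v)
print(out.shape)  # torch.Size([1, 128, 8, 64])
```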
Sorry, I completely forgot about it. I'll add it as soon as I'm done with the new Dreambooth method.
> @TheLastBen Would you be able to make the whl using that file and add it? I'd make the whl and do a new PR, but it's not working for...
The Dreambooth Colab?
Is your GPU an A100? Run `!nvidia-smi` to check.
Yep, it's the A100. I'll fix it soon; try getting another Colab GPU in the meantime.
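For reference, the same check can be done from Python instead of `!nvidia-smi` (a sketch; assumes a CUDA-enabled torch install, which Colab GPU runtimes provide):

```python
# Print the attached GPU's name, e.g. "Tesla T4" or "A100-SXM4-40GB"
import torch

print(torch.cuda.get_device_name(0))
```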
Yes, the T4 should be enough.