memory_efficient_dreambooth
Will it work on a 1080 Ti?
Before doing anything, I would like to know.
Thx.
It's not working for me
RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 10.91 GiB total capacity; 8.79 GiB already allocated; 141.25 MiB free; 9.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
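For what it's worth, here is a minimal sketch of what the error message itself suggests: setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` before PyTorch touches the GPU. The value 128 is just an assumption to tune, not a recommendation from this thread.

```python
# Sketch: set the allocator option the OOM message mentions, before any CUDA
# allocation happens. 128 MB is an illustrative value; adjust for your card.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # import after the environment variable is set

print(torch.cuda.get_device_name(0))
```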
It works, try this one: https://github.com/smy20011/dreambooth-docker
or
https://github.com/ShivamShrirao/diffusers/tree/main/examples/dreambooth
@ZeroCool22 I've already tried Shivam's repo - do you use any additional memory optimization?
It seems the conda environment has an outdated version of transformers. After updating transformers from git and installing a proper version of xformers, it works, but it runs slower than on Colab.
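For anyone hitting the same thing, a small sanity check may help before launching training. This is only a sketch (it assumes xformers is installed with a build matching your CUDA/PyTorch combination, and the tensor shapes are purely illustrative): it prints the installed package versions and runs one memory-efficient attention call to confirm the xformers build actually works.

```python
# Sketch: report installed versions, then exercise xformers' memory-efficient
# attention once on the GPU to catch a mismatched build early.
import importlib.metadata as metadata

for pkg in ("transformers", "diffusers", "xformers"):
    try:
        print(f"{pkg}: {metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")

import torch
import xformers.ops as xops

# Tiny illustrative tensors: batch 1, 16 tokens, 8 heads, head dim 40.
q = k = v = torch.randn(1, 16, 8, 40, device="cuda", dtype=torch.float16)
out = xops.memory_efficient_attention(q, k, v)
print("xformers memory_efficient_attention OK:", tuple(out.shape))
```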