fast-stable-diffusion
Support of other kinds of GPU
Hi Ben, I would like to know if there is any method that can support the training on RTX 3090/ A6000. Thanks a lot.
Hi, Colab doesn't offer those GPUs as far as I know
Yeah, I know. I downloaded the notebooks and modified the corresponding parts to run training on my local GPU. What changes should I make if my local GPUs are RTX 3090 / A6000?
```python
if (gpu=='T4'):
  %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/T4/xformers-0.0.13.dev0-py3-none-any.whl
elif (gpu=='P100'):
  %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/P100/xformers-0.0.13.dev0-py3-none-any.whl
elif (gpu=='V100'):
  %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/V100/xformers-0.0.13.dev0-py3-none-any.whl
elif (gpu=='A100'):
  %cd /usr/local/lib/python3.7/diffusers/models/
  !rm /usr/local/lib/python3.7/diffusers/models/attention.py
  wget.download('https://raw.githubusercontent.com/huggingface/diffusers/269109dbfbbdbe2800535239b881e96e1828a0ef/src/diffusers/models/attention.py')
  %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/A100/xformers-0.0.13.dev0-py3-none-any.whl
```
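For a local adaptation, the selection logic in that cell could be extended along these lines. This is a hypothetical sketch (the function and constant names are my own, not part of the notebook): the repo ships precompiled xformers wheels only for the Colab GPUs (T4/P100/V100/A100), so Ampere cards like the RTX 3090 and A6000 (both sm_86) would fall through to a build-from-source path.

```python
# Hypothetical helper mapping a GPU name to an xformers install plan.
# Colab GPUs get the repo's precompiled wheel; RTX 3090 / A6000 (sm_86)
# have no precompiled wheel here and need a source build instead.

PRECOMPILED = "https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled"
WHEEL = "xformers-0.0.13.dev0-py3-none-any.whl"

def xformers_install_plan(gpu_name: str) -> str:
    """Return a pip target URL for known Colab GPUs, or a build hint."""
    for colab_gpu in ("T4", "P100", "V100", "A100"):
        if colab_gpu in gpu_name:
            return f"{PRECOMPILED}/{colab_gpu}/{WHEEL}"
    if "3090" in gpu_name or "A6000" in gpu_name:
        # sm_86 Ampere cards: no precompiled wheel in this repo.
        return "build-from-source (TORCH_CUDA_ARCH_LIST=8.6)"
    return "unsupported"

print(xformers_install_plan("Tesla T4"))
print(xformers_install_plan("NVIDIA GeForce RTX 3090"))
```

In a notebook you could feed this the output of `torch.cuda.get_device_name(0)` and `%pip install` the returned URL when one is available.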
If you get this working, let us know!
you don't need that cell for local, try adapting this : https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/Local_fast_DreamBooth-Win.ipynb
Hi Ben. Thanks a lot! I will try it! :)
You will want to grab the compiler from the bottom cell of the A1111 UI notebook. It's going to take about 50 minutes, but you should be able to `pip install <location of whl>` at that point. Compile two, put them in different folders, then you can rewrite the code with the install directories from there.

Or keep it simple: write one line for each GPU and hash out (comment out) the ones you are not using.
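Since the repo's precompiled wheels target Colab GPUs, an RTX 3090 / A6000 (both sm_86) would need xformers built from source, roughly as the comment above describes. A rough sketch of such a build, assuming a CUDA toolkit and a matching PyTorch are already installed (the `TORCH_CUDA_ARCH_LIST` value is the key part for these cards):

```shell
# Build xformers from source for Ampere sm_86 cards (RTX 3090 / A6000).
# Assumes a CUDA toolkit and a matching PyTorch are already installed.
pip install ninja                   # speeds up the CUDA extension build
export TORCH_CUDA_ARCH_LIST="8.6"   # compile kernels for sm_86 only
pip install -v -U git+https://github.com/facebookresearch/xformers.git#egg=xformers
# Expect the compile to take a long time, in line with the ~50 minutes above.
```

To get a reusable `.whl` instead of a direct install, run `pip wheel` against the same source and keep one wheel per GPU type, as suggested above.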
Any way to make this work with the TPUs offered on other notebooks?
Last I checked, there is a Flax/JAX version out, but you miss out on all the speed features, so it kind of levels out (as far as training goes).
@XixuHu You may also like to check out this issue, where I was exploring getting things running on an RTX 3090 on RunPod:
- https://github.com/TheLastBen/fast-stable-diffusion/issues/80#issuecomment-1305002261