
Support of other kinds of GPU

Open XixuHu opened this issue 2 years ago • 9 comments

Hi Ben, I would like to know if there is any method that can support the training on RTX 3090/ A6000. Thanks a lot.

XixuHu avatar Nov 06 '22 15:11 XixuHu

Hi, Colab doesn't offer those GPUs as far as I know

TheLastBen avatar Nov 06 '22 15:11 TheLastBen

Yeah, I know. I downloaded the notebooks and modified the relevant cells to run on my local GPU to speed up training. What changes should I make if my local GPUs are an RTX 3090 / A6000?

```python
if (gpu=='T4'):
  %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/T4/xformers-0.0.13.dev0-py3-none-any.whl
elif (gpu=='P100'):
  %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/P100/xformers-0.0.13.dev0-py3-none-any.whl
elif (gpu=='V100'):
  %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/V100/xformers-0.0.13.dev0-py3-none-any.whl
elif (gpu=='A100'):
  %cd /usr/local/lib/python3.7/diffusers/models/
  !rm /usr/local/lib/python3.7/diffusers/models/attention.py
  wget.download('https://raw.githubusercontent.com/huggingface/diffusers/269109dbfbbdbe2800535239b881e96e1828a0ef/src/diffusers/models/attention.py')
  %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/A100/xformers-0.0.13.dev0-py3-none-any.whl
```
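For a local machine, the GPU check in that cell can be done in plain Python rather than with Colab magics. Note that the precompiled wheels in this repo target Colab's T4/P100/V100/A100, so an RTX 3090 or A6000 (compute capability 8.6) has no matching wheel here and xformers would need to be built from source. A minimal sketch of the lookup (the `pick_wheel` helper is hypothetical, not part of the notebook):

```python
# Map each Colab GPU family to its precompiled xformers wheel (URLs from the cell above).
BASE = "https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled"
WHEELS = {name: f"{BASE}/{name}/xformers-0.0.13.dev0-py3-none-any.whl"
          for name in ("T4", "P100", "V100", "A100")}

def pick_wheel(gpu_name: str):
    """Return the matching wheel URL, or None if the GPU has no precompiled build."""
    for key, url in WHEELS.items():
        if key in gpu_name:
            return url
    # e.g. "NVIDIA GeForce RTX 3090" or "NVIDIA RTX A6000": no wheel here,
    # so you would build xformers from source for that architecture instead.
    return None
```

Locally you would then call something like `pick_wheel(torch.cuda.get_device_name(0))` and `pip install` the returned URL if it is not `None` (assumes PyTorch with CUDA is installed).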

XixuHu avatar Nov 06 '22 16:11 XixuHu

If you get this working, let us know!

CptGabok avatar Nov 06 '22 16:11 CptGabok

you don't need that cell for local, try adapting this : https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/Local_fast_DreamBooth-Win.ipynb

TheLastBen avatar Nov 06 '22 16:11 TheLastBen

Hi Ben. Thanks a lot! I will try it! :)

XixuHu avatar Nov 06 '22 16:11 XixuHu

> I downloaded the notebooks and modified the relevant cells to run on my local GPU. What changes should I make if my local GPUs are an RTX 3090 / A6000?

You will want to grab the compiler from the bottom cell of the A1111 UI notebook. It's going to take about 50 minutes, but you should be able to `pip install` (location of whl) at that point. Compile two, put them in different folders, and then you could rewrite the code with the install directories from there.

Or keep it simple: write one line for each GPU and comment out the ones you are not using.

nawnie avatar Nov 06 '22 18:11 nawnie

Any way to make this work with the Tensor TPUs offered on other notebooks?

Portareumbra avatar Nov 06 '22 20:11 Portareumbra

> Any way to make this work with the Tensor TPUs offered on other notebooks?

Last I checked, there is a Flax/JAX version out, but you miss out on all the speed features, so it kind of levels out (as far as training goes).

nawnie avatar Nov 07 '22 10:11 nawnie

@XixuHu You may also like to check out this issue, where I was exploring getting things running on an RTX 3090 on RunPod:

  • https://github.com/TheLastBen/fast-stable-diffusion/issues/80#issuecomment-1305002261

0xdevalias avatar Nov 09 '22 06:11 0xdevalias