stable-diffusion-webui
Support for TPU
Can it support Google TPUs (like on Google Colab)?
What do you mean? TPUs favor TensorFlow; everything here is PyTorch.
I'm curious as well. If you get a TPU on Colab, will it be slower than an RTX card of the same tier?
It's said that TPUs are faster at inference, but not at training.
Just tested a local RTX 2060 6 GB vs. a Colab T4 12 GB.
The 2060 appears to be ~25% faster at text2image and image2image.
Training I can't test on a 2060, lol.
TPUs seem to be good at generating images in parallel. It would be very nice to have such compatibility.
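For anyone wondering what "parallel" means here: a Colab TPU exposes 8 cores, and JAX's `pmap` runs the same function once per core, so a batch of prompts maps naturally onto them. A toy sketch of the mechanism (the `denoise_step` function is a stand-in, not real diffusion code):

```python
import jax
import jax.numpy as jnp

@jax.pmap  # compile once, run one copy of the function per TPU core
def denoise_step(latents):
    # stand-in math; a real diffusion step would call the UNet here
    return latents * 0.9 + jnp.sin(latents)

n = jax.device_count()               # 8 cores on a Colab TPU v2/v3
latents = jnp.zeros((n, 4, 64, 64))  # leading axis: one slice per core
latents = denoise_step(latents)      # all cores step at the same time
print(latents.shape)                 # (8, 4, 64, 64)
```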
Is there any development on this?
Diffusers officially supports TPU, so I'm guessing adding it wouldn't be a complete overhaul. However, since it's Flax, I'm not sure exactly how it would be done.
There is the project https://github.com/magicknight/stable-diffusion-tpu, but it seems a bit abandoned.
I searched for some information; it seems launch.py and webui.py would need to be modified:
https://blog.richliu.com/2023/03/04/5109/stable-diffusion-webui-cpu-only-on-arm64-platform/
https://huggingface.co/docs/diffusers/using-diffusers/stable_diffusion_jax_how_to
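From skimming those posts, my rough (untested) understanding is that the change boils down to teaching the webui's device selection to hand back an XLA device when torch_xla is installed. Something like this sketch; the webui's real device logic lives in modules/devices.py, and this is not a drop-in patch:

```python
import torch

def get_optimal_device():
    try:
        import torch_xla.core.xla_model as xm  # only present on TPU setups
        return xm.xla_device()                 # e.g. device('xla:0')
    except ImportError:
        pass
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")
```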
Omg, webui on Android? (edit: nvm) Wonder if TPU inference would work on the Tensor chip in a Pixel 6...
> TPUs favor TensorFlow; everything here is PyTorch.
That's a misunderstanding. The "T" in TPU stands for "Tensor", not "TensorFlow". Both PyTorch and TensorFlow can use TPUs under the hood. Look at https://colab.research.google.com/github/pytorch/xla/blob/master/contrib/colab/getting-started.ipynb
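The gist of that notebook, for reference: ordinary PyTorch code, with tensors placed on an XLA device that is backed by the TPU:

```python
import torch
import torch_xla.core.xla_model as xm

dev = xm.xla_device()              # resolves to the TPU when one is attached
a = torch.randn(2, 2, device=dev)
b = torch.randn(2, 2, device=dev)
c = a @ b                          # lowered to XLA and executed on the TPU
print(c.device)                    # xla:0
```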
Besides the original SD, there is also the Diffusers edition, which can work on TPU: https://huggingface.co/blog/stable_diffusion_jax
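That blog boils down to something like this (the model id and bf16 revision are the ones used in the post; check it for current details):

```python
import jax
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline

pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", revision="bf16", dtype=jax.numpy.bfloat16
)

prompts = ["a photo of an astronaut riding a horse"] * jax.device_count()
prompt_ids = pipeline.prepare_inputs(prompts)  # tokenize the prompts

params = replicate(params)                     # copy weights to every TPU core
prompt_ids = shard(prompt_ids)                 # one prompt per core
rng = jax.random.split(jax.random.PRNGKey(0), jax.device_count())

# jit=True pmaps the whole pipeline: each core generates one image in parallel
images = pipeline(prompt_ids, params, rng, jit=True).images
```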
I can get it to run on a TPU VM, but it's very slow.
> Can it support Google TPUs (like on Google Colab)?
I looked into the source code; it looks like it would take a massive effort to support TPUs. First we'd need custom versions of torch, torch_xla, and torchvision, and then we'd need to modify Stable Diffusion itself wherever it calls torch APIs. TPUs currently don't support all the APIs used in Stable Diffusion, meaning each one would need to be debugged individually.
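For the debugging part, torch_xla at least tells you which ops it couldn't lower: anything that falls back to the CPU shows up as an `aten::` counter in its metrics report. A minimal sketch:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met

dev = xm.xla_device()
x = torch.randn(8, 8, device=dev)
w = torch.linalg.eig(x)      # example of an op that may not be lowered
print(met.metrics_report())  # aten::* counters = ops that ran on CPU instead
```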
How would we even handle the memory? The Coral TPUs don't even have any to begin with. Still, it would be really cool if there were support.
> I can get it to run on a TPU VM, but it's very slow.

Can you share the code? How were you able to get it running?
Was it slow because of the low performance of the TPU, or because the TPU wasn't used and the script ran on the CPU?
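An easy way to rule out the CPU-fallback case on a TPU VM, assuming the JAX path was used:

```python
import jax
print(jax.default_backend())  # 'tpu' if the TPU is really in use, 'cpu' if not
print(jax.devices())          # should list TpuDevice entries
```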
> I looked into the source code; it looks like it would take a massive effort to support TPUs. First we'd need custom versions of torch, torch_xla, and torchvision, and then we'd need to modify Stable Diffusion itself wherever it calls torch APIs. TPUs currently don't support all the APIs used in Stable Diffusion, meaning each one would need to be debugged individually.
TPUs are currently the only way to give usable access to tools like automatic1111 to users who are unable to upgrade their GPU. This applies, for example, to all laptops without a dedicated GPU. TPU support would significantly increase the userbase.
Isn't it possible to use something like this with automatic1111: https://huggingface.co/blog/stable_diffusion_jax ?
It's definitely possible. Here's an example of someone getting SDXL running on Google's TPU v5e: https://huggingface.co/blog/sdxl_jax