stable-diffusion-webui
[Bug]: Textual inversion no longer works in colab
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
When I try to start training in Colab, this error occurs:
Preparing dataset...
100% 14/14 [00:02<00:00, 6.28it/s]
0% 0/5000 [00:00<?, ?it/s]Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/hypernetworks/hypernetwork.py", line 509, in train_hypernetwork
loss = shared.sd_model(x, c)[0] / gradient_step
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 846, in forward
return self.p_losses(x, c, t, *args, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 903, in p_losses
logvar_t = self.logvar[t].to(self.device)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
It did not happen yesterday.
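The error itself is a plain PyTorch device mismatch: `self.logvar` lives on the CPU while the timestep indices `t` are on the GPU. A minimal sketch, outside of the webui, that reproduces the same failure on recent PyTorch builds (assuming a CUDA device, as in Colab):

```python
import torch

# A CPU lookup table, like self.logvar in ddpm.py, and CUDA timestep
# indices, like t inside p_losses once the model runs on the GPU.
logvar = torch.zeros(1000)                       # stays on the CPU
t = torch.randint(0, 1000, (4,), device="cuda")  # lives on the GPU

try:
    # Failing pattern from ddpm.py line 903: the CPU tensor is indexed
    # with CUDA indices *before* it is moved to the device.
    logvar_t = logvar[t].to("cuda")
except RuntimeError as e:
    print(e)  # "indices should be either on cpu or on the same device ..."

# Either variant avoids the mismatch:
logvar_t = logvar.to("cuda")[t]        # move the table to the GPU first
logvar_t = logvar[t.cpu()].to("cuda")  # or keep the indexing on the CPU
```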
Steps to reproduce the problem
- Start webui
- Go to the training tab
- Create a hypernetwork or embedding
- Set up parameters for it
- Start training
- Observe breakage
What should have happened?
It should have started training.
Commit where the problem happens
44c46f0
What platforms do you use to access the UI?
Other/Cloud
What browsers do you use to access the UI?
Mozilla Firefox
Command Line Arguments
No response
Additional information, context and logs
Same error in a different context: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3958. I tried the fix from https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3958#issuecomment-1296081445, but it didn't work.
Same here.
Same problem since today. I tried the fix from #3958 (comment) and it works for me.
I've added `self.logvar = self.logvar.to(self.device)` in ddpm.py just above line 903 before starting anything.
Maybe this can be patched in the Colab itself?
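For reference, a rough sketch of what the patched region in `p_losses` looks like after adding that line (reconstructed from the traceback above; the surrounding code may differ slightly between commits):

```python
# repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py,
# inside LatentDiffusion.p_losses, around line 903:
self.logvar = self.logvar.to(self.device)   # workaround: move the logvar table to the model's device
logvar_t = self.logvar[t].to(self.device)   # original line; both tensors are now on the same device
```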
> I've added `self.logvar = self.logvar.to(self.device)` in ddpm.py above line 903 before starting anything.
Here is a script that does that in Colab:
!sed -i '/logvar_t = self.*/i \ \ \ \ \ \ \ \ self.logvar = self.logvar.to(self.device)' /content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py
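Note that the sed call inserts the line again on every run of the cell. If you would rather patch from a Python cell in the notebook (same file path as above, assumed unchanged), a guarded version might look like this:

```python
# Patch ddpm.py from a Colab cell, inserting the workaround only once.
path = "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py"
fix = "        self.logvar = self.logvar.to(self.device)\n"

with open(path) as f:
    lines = f.readlines()

if fix not in lines:  # skip if the file was already patched
    for i, line in enumerate(lines):
        if "logvar_t = self.logvar[t]" in line:
            lines.insert(i, fix)  # insert the fix just above the failing line
            break
    with open(path, "w") as f:
        f.writelines(lines)
```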