stable-diffusion-webui
[Help]: How to train Textual Inversion embeddings on the CPU with a CUDA-enabled machine?
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
Textual Inversion error, assist please.
Steps to reproduce the problem
- Go to Training
- Press Train
- Error appears
What should have happened?
Well it should begin.
Commit where the problem happens
Textual Inversion/Train
What platforms do you use to access the UI ?
Windows
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
--use-cpu --precision full --no-half
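(As an aside, in recent versions of the webui `--use-cpu` expects one or more module names, e.g. `all`, rather than standing alone. A sketch of a full-CPU launch configuration, assuming the standard `webui-user.bat` conventions; adjust to your setup:)

```shell
:: webui-user.bat — route all modules to the CPU and disable half precision.
:: "--use-cpu all" is the documented form; a bare "--use-cpu" with no module
:: list will not parse as intended.
set COMMANDLINE_ARGS=--use-cpu all --precision full --no-half
```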
List of extensions
N/A
Console logs
Traceback (most recent call last):
File "C:\Users\moham\stable-diffusion-webui-master\modules\textual_inversion\textual_inversion.py", line 503, in train_embedding
scaler.scale(loss).backward()
File "C:\Users\moham\stable-diffusion-webui-master\venv\lib\site-packages\torch\cuda\amp\grad_scaler.py", line 164, in scale
assert outputs.is_cuda or outputs.device.type == 'xla'
AssertionError
Additional information
Help please. I used to be able to do this on the CPU on my other laptop, but I guess it didn't actually use my GPU, because that was a 4 GB AMD card.
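For context, the `AssertionError` in the log comes from PyTorch's `torch.cuda.amp.GradScaler`, which refuses to scale a loss tensor that lives on the CPU. A minimal sketch of the usual workaround, constructing the scaler with `enabled=False` so `scale()` becomes a no-op and the same training loop runs on CPU. This is an illustration of the PyTorch behavior, not the webui's actual code path:

```python
import torch

# torch.cuda.amp.GradScaler.scale() asserts that the loss is a CUDA (or XLA)
# tensor, which is exactly the AssertionError shown in the console log when
# training runs on the CPU.

# Hypothetical tiny model and loss, just to produce a CPU loss tensor.
model = torch.nn.Linear(4, 1)
loss = model(torch.randn(2, 4)).mean()

# With enabled=False the scaler is a no-op: scale() returns the loss
# unchanged, so backward() works on a CPU tensor.
scaler = torch.cuda.amp.GradScaler(enabled=False)
scaler.scale(loss).backward()
```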
Hi @Annuakin, I am able to train both textual inversion and LoRAs (U-Net only) in 4 GB using https://github.com/kohya-ss/sd-scripts
I suggest you use this repo instead, as it has 8-bit training options. It should work with Ampere-series GPUs.
> I am able to train both textual inversion and LoRAs (U-Net only) in 4 GB using https://github.com/kohya-ss/sd-scripts
Does it actually work on 4 GB of VRAM, or do you mean on the CPU? How long does it take?
Yes, ~25 min. It depends on whether bitsandbytes and xformers work well on your machine.
Closing as stale.