
Add batched learning of textual inversion

Open matthewdm0816 opened this issue 3 years ago • 5 comments

Added batched learning to the textual inversion tab. On large-VRAM cards, a batch size larger than 1 may help speed up training: entries are simply cached until a batch's worth has been collected, then a single training step is taken.
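The cache-then-step idea described above can be sketched in plain Python (class and method names here are hypothetical, for illustration only, not the PR's actual code):

```python
class BatchedTrainer:
    """Minimal sketch: cache incoming entries until batch_size of them
    have been collected, then run one training step on the whole batch."""

    def __init__(self, batch_size, train_step):
        self.batch_size = batch_size
        self.train_step = train_step  # callable that consumes a list of entries
        self.cache = []
        self.steps_taken = 0

    def feed(self, entry):
        self.cache.append(entry)
        if len(self.cache) >= self.batch_size:
            self.train_step(self.cache)
            self.steps_taken += 1  # one step per full batch, not per entry
            self.cache = []

# Usage: 10 entries at batch size 4 -> 2 full steps, 2 entries still cached
batches = []
trainer = BatchedTrainer(batch_size=4,
                         train_step=lambda batch: batches.append(len(batch)))
for i in range(10):
    trainer.feed(i)
```

Note that `steps_taken` advances by 1 per batch; this is the counting behaviour debated later in the thread.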

matthewdm0816 avatar Oct 14 '22 22:10 matthewdm0816

As suggested in issue https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1862.

matthewdm0816 avatar Oct 14 '22 22:10 matthewdm0816

I was already working on this for both TI and hypernets; my version is not merged yet. Also, looking at the code, I think you're increasing the step by the batch count every time, which is not how it's supposed to work as far as I know.

AUTOMATIC1111 avatar Oct 15 '22 07:10 AUTOMATIC1111

When I tried this batched implementation, a large batch size (e.g. 8) generally prevented textual inversion from learning, whether with the original learning rate (0.005, which works at batch size 1) or with the rate scaled by batch size (0.005 × batch size) as mentioned in the original paper. A batch size of 2 is fine and still gives a decent speedup.
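The two learning-rate choices being compared are just the base rate and the linearly scaled rate; a quick sketch of the arithmetic (values taken from the comment above):

```python
base_lr = 0.005          # rate known to work at batch size 1
batch_size = 8

# Linear scaling rule: grow the learning rate proportionally
# with the batch size, as mentioned in the textual inversion paper.
scaled_lr = base_lr * batch_size
```

Per the observation above, neither `base_lr` nor `scaled_lr` trained successfully at batch size 8, while batch size 2 remained stable.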

matthewdm0816 avatar Oct 15 '22 11:10 matthewdm0816

> I was already working on this for both TI and hypernets; my version is not merged yet. Also, looking at the code, I think you're increasing the step by the batch count every time, which is not how it's supposed to work as far as I know.

embedding.step seems to have nothing to do with the training procedure; isn't it just a counter?

matthewdm0816 avatar Oct 15 '22 11:10 matthewdm0816