Stable-textual-inversion_win

How much VRAM do I need?

Open DrakeFruit opened this issue 3 years ago • 8 comments

I have 8 GB of VRAM, and I'm running out of memory trying to run this on 700 images. It says I need 30 GB, but is that a strict requirement?

DrakeFruit avatar Aug 23 '22 05:08 DrakeFruit

I'm training on 24 GB of VRAM, so 30 GB is not a strict requirement. At the moment it doesn't work with 8 GB; maybe someone with optimisation knowledge will change the script a little, but for now it's not possible. Maybe try the Colab :)

nicolai256 avatar Aug 23 '22 13:08 nicolai256

maybe try the colab :)

There's a Colab notebook for textual inversion on Stable Diffusion?

rifeWithKaiju avatar Aug 23 '22 14:08 rifeWithKaiju

I have 8 GB of VRAM, and I'm running out of memory trying to run this on 700 images. It says I need 30 GB, but is that a strict requirement?

I am using the basujindal repo. It let me run at 768x768 with 150 steps. This repo lets me run at the base 512x512 resolution on my RTX 3070.

Pmejna avatar Aug 23 '22 18:08 Pmejna

Go to the v1-finetune.yaml file and change the batch size to 1; that should solve it. Also halve the number of workers from what it is now.
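For reference, the relevant fields sit under the data block of v1-finetune.yaml; a sketch of the change (exact nesting and default values assumed, so check your copy of the config):

```yaml
data:
  params:
    batch_size: 1   # lowered to 1 to cut peak VRAM usage
    num_workers: 4  # roughly half the original worker count
```
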

1blackbar avatar Aug 23 '22 18:08 1blackbar

File "E:\ModelTraining\ldm\modules\attention.py", line 180, in forward
    sim = einsum('b i d, b j d -> b i j', q, k) * self.scale
RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 8.00 GiB total capacity; 6.82 GiB already allocated; 0 bytes free; 7.00 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

DrakeFruit avatar Aug 24 '22 04:08 DrakeFruit

File "E:\ModelTraining\ldm\modules\attention.py", line 180, in forward
    sim = einsum('b i d, b j d -> b i j', q, k) * self.scale
RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 8.00 GiB total capacity; 6.82 GiB already allocated; 0 bytes free; 7.00 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

8 GB might not be enough; you can use the free Colab to train, though.
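As the error message itself suggests, you can also try setting max_split_size_mb in PYTORCH_CUDA_ALLOC_CONF to reduce allocator fragmentation before launching training. A sketch (the 128 MiB value is a guess; tune it for your GPU):

```shell
# Reduce CUDA allocator fragmentation; set before starting the training run.
# 128 MiB is an assumed starting point, not a repo-recommended value.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
```

Then launch training in the same shell as usual. This won't shrink the model's actual memory needs, but it can recover runs that fail with plenty of "reserved" but fragmented memory.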

nicolai256 avatar Aug 24 '22 05:08 nicolai256

size: 448 is working on a 3060 12 GB
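For anyone else trying this: the training resolution is set per-dataset inside v1-finetune.yaml. A sketch of where the size field lives (nesting assumed from the repo's config layout):

```yaml
data:
  params:
    train:
      params:
        size: 448  # reduced from 512; smaller activations, less VRAM
```
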

hlky avatar Aug 25 '22 15:08 hlky

size: 448 working on 3060 12gb

Max memory usage and batch size?

GucciFlipFlops1917 avatar Sep 20 '22 17:09 GucciFlipFlops1917