stable-diffusion-webui
[Bug]: text2img error
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
When generating with txt2img at resolutions above 1000x1000, I get errors on an RTX 3070 Ti,
while my friend renders without issue at 1500x1500 on an RTX 3060.
Steps to reproduce the problem
- Go to ....
- Press ....
- ...
What should have happened?
Commit where the problem happens
What platforms do you use to access the UI ?
No response
What browsers do you use to access the UI ?
No response
Command Line Arguments
-
List of extensions
Console logs
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.75 GiB (GPU 0; 8.00 GiB total capacity; 5.89 GiB already allocated; 0 bytes free; 6.00 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Additional information
No response
I've had the same issue as you for a couple of days now (#8571). I used to be able to generate at 1080p with the same settings, then suddenly started getting OOM errors.
Could you try using --opt-split-attention-v1
to see if that mitigates the issue?
This error likely depends on your VRAM. Check your VRAM size and usage, and to keep image generation within your memory budget, use "--medvram": this option loads model components to the GPU in stages so less VRAM is needed at once.
If you still get an out-of-memory error with "--medvram", use "--lowvram --always-batch-cond-uncond" instead: this further reduces memory usage, and "--always-batch-cond-uncond" keeps the conditional and unconditional prompts processed together in one batch.
Likewise, if you want to generate images larger than "--medvram" allows for your amount of VRAM, switch to "--lowvram" to reduce memory usage even further.
Try one of these three options and see if that mitigates the issue.
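For intuition on why resolution matters so much here: the self-attention score matrix in the UNet grows with the square of the number of latent tokens, so doubling image width and height roughly 16x the attention memory. A rough back-of-the-envelope sketch (the head count and fp16 assumption are illustrative SD 1.x numbers, not exact accounting):

```python
def attention_bytes(width, height, heads=8, bytes_per_el=2):
    """Rough size of one self-attention score matrix at full latent
    resolution (latent side = pixel side / 8), assuming fp16 and
    8 heads. Illustrative estimate only, not exact VRAM accounting."""
    tokens = (width // 8) * (height // 8)  # latent tokens attend to each other
    return tokens * tokens * heads * bytes_per_el

for side in (512, 768, 1000, 1500):
    gib = attention_bytes(side, side) / 2**30
    print(f"{side}x{side}: ~{gib:.2f} GiB for one attention matrix")
```

At 1000x1000 this estimate comes out near the "Tried to allocate 3.64 GiB" figure in the logs above, which is consistent with attention being the allocation that fails; options like --opt-split-attention-v1 and --medvram work by computing this in smaller slices instead of all at once.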
I started getting this error when trying to train DreamBooth, but I have an RTX 3090 with 24 GB of VRAM, and it worked before.
@Xyem @nhari999 How do I configure these parameters? I only know how to run webui.bat.
To configure the parameters, put them into your webui-user.bat: find the webui-user.bat file, right-click it, and click Edit; it will look like this in the text editor:
@echo off
set PYTHON="C:\Users\MSI pulse\AppData\Local\Programs\Python\Python310\python.exe"
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=
git pull
call webui.bat
Then add the flag after "set COMMANDLINE_ARGS=", like this: "set COMMANDLINE_ARGS=--medvram". Save the file and restart Stable Diffusion.
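For example, the edited webui-user.bat could look like the following (keep your own PYTHON path from above; the PYTORCH_CUDA_ALLOC_CONF line follows the suggestion printed in the OOM message itself, and max_split_size_mb=512 is an illustrative value, not a guaranteed fix):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem reduce VRAM pressure; swap in --lowvram --always-batch-cond-uncond if this still OOMs
set COMMANDLINE_ARGS=--medvram
rem optional: reduce allocator fragmentation, as the error message suggests
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

git pull
call webui.bat
```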
@nhari999 thank you
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.64 GiB (GPU 0; 8.00 GiB total capacity; 5.34 GiB already allocated; 571.00 MiB free; 5.39 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 19.02 GiB (GPU 0; 8.00 GiB total capacity; 1.81 GiB already allocated; 4.08 GiB free; 1.87 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
return _VF.einsum(equation, operands) # type: ignore[attr-defined]
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 7.28 GiB (GPU 0; 8.00 GiB total capacity; 189.80 MiB already allocated; 5.73 GiB free; 224.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
1000x1000