stable-diffusion-webui-forge

controlnet `Batch` workflow needs to be efficient

westNeighbor opened this issue on Aug 31 '24 · 2 comments

In Forge, the ControlNet Batch method is driven by Batch size, while the official A1111 uses Batch count for it. The Forge approach accumulates VRAM, so it can't handle many batch images: on my 16 GB RTX 4080 it can do ~30 ControlNet batch images, but there is no way it can handle 150. The official A1111 handles this without any problem. I think Forge should follow the official A1111 approach.
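
For illustration only, here is a minimal sketch (hypothetical names and shapes, not actual Forge or A1111 code) of why the two approaches differ in VRAM behaviour: a "Batch size" style run stacks all conditioning images into one tensor and pushes them through the model in a single forward pass, so peak memory grows with the number of images, while a "Batch count" style run loops over the images one at a time and keeps peak memory roughly constant.

    # Hypothetical sketch contrasting the two batching styles (not Forge/A1111 code).
    import torch

    def run_model(cond: torch.Tensor) -> torch.Tensor:
        # Stand-in for a diffusion + ControlNet forward pass.
        return cond * 2.0

    def batch_size_style(images: list[torch.Tensor]) -> list[torch.Tensor]:
        # "Batch size" behaviour: all conditioning images are stacked into one
        # tensor and processed in a single pass, so peak VRAM scales with len(images).
        stacked = torch.stack(images)          # shape: [N, C, H, W]
        return list(run_model(stacked))

    def batch_count_style(images: list[torch.Tensor]) -> list[torch.Tensor]:
        # "Batch count" behaviour: images are processed one at a time, so peak
        # VRAM stays roughly constant regardless of how many images there are.
        results = []
        for img in images:
            results.append(run_model(img.unsqueeze(0))[0])
        return results

    if __name__ == "__main__":
        imgs = [torch.rand(3, 64, 64) for _ in range(150)]
        out = batch_count_style(imgs)   # sequential: fine even for 150 images
        print(len(out))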

westNeighbor · Aug 31 '24 22:08

If you don't use a preprocessor (set it to None; I have preprocessed OpenPose images in my batch folder), it saves a lot of VRAM, but it can't even get through 20 batch images and fails with the following error:

Traceback (most recent call last):
  File "D:\webui_forge_cu121_torch21\webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable

westNeighbor · Aug 31 '24 22:08

I am also encountering the same problem. Basing the batch on "Batch size" is useless: for example, if you put 100 OpenPose images in the ControlNet batch folder and try to run batch processing, a memory overflow occurs. A1111 handles the same case, as described above. My understanding is that "Batch size" means parallel processing and "Batch count" means sequential processing.

j238 · Oct 06 '24 00:10