papersplease

Results 3 comments of papersplease

Currently, the most useful memory-optimized version is this one: https://github.com/AUTOMATIC1111/stable-diffusion-webui It has its own implementation and optimized attention, so it can even run on 2GB VRAM GPUs, albeit slowly. Also...

> Ohh? 👀👀👀 With --lowvram --opt-split-attention, it fits into about 1.2GB of VRAM for a 512x512 render. I think it can even run on Kepler if you fiddle with an older PyTorch...
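For context, those flags are normally passed through the webui's launch config. A minimal sketch of how that might look in `webui-user.sh` (assuming the stock AUTOMATIC1111 launcher, which reads `COMMANDLINE_ARGS` from this file):

```shell
# webui-user.sh — launch config for AUTOMATIC1111's stable-diffusion-webui
# --lowvram: aggressively offload model parts to system RAM, trading speed for VRAM
# --opt-split-attention: compute attention in chunks to reduce peak VRAM usage
export COMMANDLINE_ARGS="--lowvram --opt-split-attention"
```

On Windows the equivalent would go into `webui-user.bat` via `set COMMANDLINE_ARGS=...` instead.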

> The card is older and has a different design. Newer cards have things like tensor cores and half-precision float support; that's why they aren't identical, I assume....