LLDiffusion
Computational cost and test settings
Hi there, thank you for sharing your impressive work. I attempted to run your test scripts on lolv2-real with the default settings (batch-size=512 and grid_r=16) on a single 4090 GPU. However, the model allocates nearly 24 GB of GPU memory, making it almost impossible to run. How do these parameters affect image quality and computational cost, and what configuration do you recommend for a 4090 GPU? Thanks.
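For reference, here is the rough back-of-envelope calculation I used while experimenting with smaller batch sizes. It assumes activation memory scales roughly linearly with batch size and that a fixed ~2 GB goes to weights and CUDA context; both are guesses on my part, not measurements of LLDiffusion itself:

```python
def max_batch_size(observed_gb, observed_batch, budget_gb, fixed_gb=2.0):
    """Estimate the largest batch size fitting within budget_gb.

    fixed_gb approximates batch-independent cost (model weights,
    CUDA context); 2.0 is an assumed value, not measured.
    Assumes per-sample memory scales linearly with batch size.
    """
    per_sample_gb = (observed_gb - fixed_gb) / observed_batch
    return int((budget_gb - fixed_gb) / per_sample_gb)

# Observed ~24 GB at batch-size=512; target a ~20 GB budget on a 24 GB 4090
print(max_batch_size(observed_gb=24.0, observed_batch=512, budget_gb=20.0))
# → 418
```

This suggests something like batch-size=256 or 384 should fit comfortably, but I would appreciate knowing whether reducing it (or grid_r) degrades restoration quality.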