InstantStyle

Not enough RAM

Open TonyAssi opened this issue 1 year ago • 3 comments

I am running out of RAM when I run this code. I tried Google Colab T4 and V100 instances, both with 16 GB of RAM.

I also tried using both of these VAEs:

```python
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
```

```python
pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    add_watermarker=False,
    vae=vae,
)
```

Any suggestions on how to run using less RAM?

TonyAssi avatar Apr 04 '24 21:04 TonyAssi

Here are some general suggestions; not every method worked in our testing, but `pipe.enable_vae_tiling()` does reduce memory consumption by about 3 GB.

ResearcherXman avatar Apr 08 '24 09:04 ResearcherXman

16 GB of VRAM is enough for generation with the SDXL pipeline; check my notebook and run it on a V100 high-RAM runtime.

yi avatar Apr 10 '24 00:04 yi

We have added an experimental distributed inference feature from diffusers.

ResearcherXman avatar Apr 10 '24 19:04 ResearcherXman