Not enough VRAM
I am running out of GPU memory (VRAM) when I run this code. I tried Google Colab T4 and V100 GPUs, both with 16 GB of VRAM.
I also tried using both of these VAEs:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
# vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)

pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    add_watermarker=False,
    vae=vae,
)
```
Any suggestions on how to run this using less VRAM?
Here are some general suggestions; not every method worked in our testing, but `pipe.enable_vae_tiling()` does reduce memory consumption by about 3 GB.
16 GB of VRAM is enough for generation with the SDXL pipeline; check my notebook and run it on a V100 high-RAM runtime.
We have added an experimental distributed inference feature based on diffusers.