Add an example Google Colab / Jupyter Notebook file
Similar to:
- https://github.com/Stability-AI/stablediffusion/issues/4
Related:
- https://github.com/Stability-AI/stablediffusion/issues/16
https://colab.research.google.com/drive/1cuzz-TcAXsqlCPqqa8U7LltMmeZuBYkh?usp=sharing
Should work now; make sure you check the "redownload original model" box when choosing V2.
https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb
Requires more than 12GB of RAM for now, so free Colab probably won't suffice.
Originally posted by @TheLastBen in https://github.com/TheLastBen/fast-stable-diffusion/issues/599#issuecomment-1326461962
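As an aside, here is a minimal sketch (plain Python, no Colab-specific APIs) to check whether the current runtime clears the ~12 GB RAM bar mentioned above before committing to a long notebook run:

```python
# Report total physical RAM and compare against the ~12 GB requirement noted
# above. Uses os.sysconf, which is available on the Linux VMs Colab runs on.
import os

page_size = os.sysconf("SC_PAGE_SIZE")    # bytes per memory page
phys_pages = os.sysconf("SC_PHYS_PAGES")  # total physical pages
total_gb = page_size * phys_pages / 1024**3

print(f"Total RAM: {total_gb:.1f} GB")
if total_gb <= 12:
    print("Likely too little RAM for this notebook on a free Colab runtime.")
```

Run this in a notebook cell before starting; free-tier runtimes typically report around 12 GB, high-RAM runtimes around 25 GB.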
@0xdevalias Hello there, I hope it's fine with you that I created a sort of "minified" version of your colab, which works fine on a T4 using xformers. Later on I might add other tools like inpainting, depth, and upscaling.
https://colab.research.google.com/drive/1wzI9nt8dYqq47I1Y-WvOAiO01y7EBoiO?usp=sharing
Here is a Colab/Gradio setup to play with the depth model:
https://github.com/backnotprop/Colab-Stable-Diffusion-2-Depth-UI/tree/main
> I hope it's fine with you that I created a sort of "minified" version of your colab, which works fine on a T4 using xformers.
@inflamously It's not my colab, just one I found on another repo and wanted to make sure was linked and accessible here :)
The inpainting model should work now without needing to rename the checkpoint. For anyone who wants to try it: https://colab.research.google.com/drive/1ayH6PUri-vvTXhaoL3NEZr_iVvv2qosR
Originally posted by @uservar in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5011#issuecomment-1326954534
Yes, stable_diffusion2 is working now. The few lines of code needed for inference are here: https://colab.research.google.com/drive/1Na9x7w7RSbk2UFbcnrnuurg7kFGeqBsa?usp=sharing
Originally posted by @hamzafar in https://github.com/huggingface/diffusers/issues/1388#issuecomment-1326610936
```bash
pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
```

```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
import torch

model_id = "stabilityai/stable-diffusion-2"

# Use the Euler scheduler here instead of the pipeline's default
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(
    model_id, scheduler=scheduler, revision="fp16", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "beautiful gaze"
# SD2's base resolution is 768x768
image = pipe(prompt, height=768, width=768).images[0]
image
```
Alternatively, run stable_diffusion.ipynb.
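For anyone hitting out-of-memory errors on a free-tier GPU, here is a hedged sketch of the same pipeline with diffusers' built-in memory savers enabled. `enable_attention_slicing()` and `enable_xformers_memory_efficient_attention()` are standard diffusers pipeline methods; that this fits on a ~16 GB T4 is an assumption based on the xformers colab mentioned above, and the snippet is guarded so it can also run where diffusers or a CUDA GPU is unavailable:

```python
# Sketch: the inference snippet above, plus diffusers' memory-saving switches
# for small GPUs (e.g. a Colab T4). Guarded so this file runs even without
# diffusers/torch installed or without a CUDA device.
try:
    import torch
    from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

    generated = torch.cuda.is_available()
    if generated:
        model_id = "stabilityai/stable-diffusion-2"
        scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
        pipe = StableDiffusionPipeline.from_pretrained(
            model_id, scheduler=scheduler, revision="fp16", torch_dtype=torch.float16
        ).to("cuda")

        # Compute attention in slices: somewhat slower, noticeably less VRAM.
        pipe.enable_attention_slicing()
        # pipe.enable_xformers_memory_efficient_attention()  # if xformers is installed

        image = pipe("beautiful gaze", height=768, width=768).images[0]
        image.save("sd2_out.png")
except ImportError:
    generated = False  # diffusers/torch not installed in this environment
```

Attention slicing trades a little speed for a large VRAM reduction, which is usually the right trade on free Colab hardware.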