[WIP] [Core] Add support for ControlNet LoRA
What does this PR do?
Adds support for ControlNet LoRA, continuing @sayakpaul's design from #4899.
Fixes #5800
Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the contributor guideline?
- [ ] Did you read our philosophy doc (important for complex PRs)?
- [ ] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- [ ] Did you write any new necessary tests?
The current design
Load the models and the Control-LoRA weights
from diffusers import (
    StableDiffusionXLControlNetPipeline,
    ControlNetModel,
    UNet2DConditionModel,
)
import torch

pipe_id = "stabilityai/stable-diffusion-xl-base-1.0"
lora_id = "stabilityai/control-lora"
lora_filename = "control-LoRAs-rank128/control-lora-canny-rank128.safetensors"

# Load the SDXL UNet and initialize a ControlNet from its architecture and weights.
unet = UNet2DConditionModel.from_pretrained(pipe_id, subfolder="unet", torch_dtype=torch.float16).to("cuda")
controlnet = ControlNetModel.from_unet(unet).to(device="cuda", dtype=torch.float16)

# Load the Control-LoRA weights directly into the ControlNet (the API this PR adds).
controlnet.load_lora_weights(lora_id, weight_name=lora_filename, controlnet_config=controlnet.config)
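If useful, here is a minimal sanity-check sketch to confirm that LoRA layers were actually injected. It assumes the attached parameters expose "lora" somewhere in their names, which depends on the LoRA backend this PR ends up using, so treat it as illustrative only:
# Hypothetical sanity check: count parameters that look like LoRA weights.
# The exact naming ("lora_A"/"lora_B", "lora.down"/"lora.up", ...) depends on the backend.
lora_params = [
    (name, p.numel())
    for name, p in controlnet.named_parameters()
    if "lora" in name.lower()
]
print(f"{len(lora_params)} LoRA tensors, {sum(n for _, n in lora_params):,} parameters total")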
Infer
from diffusers import AutoencoderKL
from diffusers.utils import load_image, make_image_grid
from PIL import Image
import numpy as np
import cv2

prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
negative_prompt = "low quality, bad quality, sketches"
image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")

controlnet_conditioning_scale = 0.5  # recommended for good generalization

# Use the fp16-fixed SDXL VAE to avoid NaN issues in half precision.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    pipe_id,
    unet=unet,
    controlnet=controlnet,
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Preprocess the conditioning image with Canny edge detection.
image = np.array(image)
image = cv2.Canny(image, 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
image = Image.fromarray(image)

images = pipe(
    prompt,
    negative_prompt=negative_prompt,
    image=image,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
    num_images_per_prompt=4,
).images

# Show the Canny conditioning image next to the four generations.
final_image = [image] + images
grid = make_image_grid(final_image, 1, 5)
grid
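Outside a notebook, the grid (and the individual generations) can simply be written to disk instead of being displayed inline; the filenames below are arbitrary:
# Save the comparison grid and each generated image.
grid.save("control_lora_canny_grid.png")
for i, img in enumerate(images):
    img.save(f"control_lora_canny_{i}.png")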
Who can review?
@sayakpaul Can you provide me with access to the notebook you mentioned?
Here you go: https://colab.research.google.com/drive/1S-NDshYL7N4S1ugF9Y86d3SRY-Fe0Qe-?usp=sharing
Hi folks, are there any other updates on the PR?
+1 for this
We don't have the bandwidth for this at the moment, hence the state of https://github.com/huggingface/diffusers/pull/4899.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
@sayakpaul @shauray8 @George0726 @andypotato Requesting assistance from everyone, thank you very much. I ran:
controlnet = ControlNetModel.from_unet(unet).to(device="cuda", dtype=torch.float16)
controlnet.load_lora_weights(lora_id, weight_name=lora_filename, controlnet_config=controlnet.config)
The following error occurred at runtime:
AttributeError: 'ControlNetModel' object has no attribute 'load_lora_weights'
Is this due to a version issue? May I know which version you are using? My running versions are as follows:
diffusers 0.27.0.dev0
transformers 4.35.2
This feature hasn't been shipped yet.
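For reference, a quick way to check whether your installed diffusers build already exposes the method (it won't until this PR is merged and included in your installed version); this is just a sketch:
import diffusers
from diffusers import ControlNetModel

print(diffusers.__version__)
# Released versions do not have this method on ControlNetModel yet; it is added by this PR.
print(hasattr(ControlNetModel, "load_lora_weights"))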
@sayakpaul Is there an implemented test version that can be used? Thanks!