StableDiffusionXLControlNetInpaintPipeline.from_single_file Error too!!!
Describe the bug
diffusers version: 0.27.1
When I run StableDiffusionXLControlNetInpaintPipeline.from_single_file, this error occurs:
self.pipe = StableDiffusionXLControlNetInpaintPipeline.from_single_file(model_address['sdxl3D'],
File "aaaa/anaconda/envs/sd-webui/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "aaaa/anaconda/envs/sd-webui/lib/python3.10/site-packages/diffusers/loaders/single_file.py", line 289, in from_single_file
components = build_sub_model_components(
File "aaaa/anaconda/envs/sd-webui/lib/python3.10/site-packages/diffusers/loaders/single_file.py", line 61, in build_sub_model_components
unet_components = create_diffusers_unet_model_from_ldm(
File "aaaa/anaconda/envs/sd-webui/lib/python3.10/site-packages/diffusers/loaders/single_file_utils.py", line 1322, in create_diffusers_unet_model_from_ldm
unexpected_keys = load_model_dict_into_meta(unet, diffusers_format_unet_checkpoint, dtype=torch_dtype)
File "aaaa/anaconda/envs/sd-webui/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 152, in load_model_dict_into_meta
raise ValueError(
ValueError: Cannot load because conv_in.weight expected shape tensor(..., device='meta', size=(320, 9, 3, 3)), but got torch.Size([320, 4, 3, 3]). If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.
Reproduction
# Load safetensors
self.pipe = StableDiffusionXLControlNetInpaintPipeline.from_single_file(
    model_address['sdxl3D'],
    original_config_file=config_adress['sdxl-base-config'],
    torch_dtype=torch.float16,
    local_files_only=True,
    use_safetensors=True,
    add_watermarker=False,
    controlnet=self.controlnets,
)
self.pipe.enable_xformers_memory_efficient_attention()  # xFormers acceleration
# self.pipe.unet = torch.compile(self.pipe.unet, mode="reduce-overhead", fullgraph=True)  # torch.compile acceleration
self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
self.pipe.enable_model_cpu_offload()
Logs
No response
System Info
1
Who can help?
No response
I added num_in_channels=4 as shown below:
# Load safetensors
self.pipe = StableDiffusionXLControlNetInpaintPipeline.from_single_file(
    model_address['sdxl卡哇伊3D'],
    original_config_file=config_adress['sdxl-base-config'],
    torch_dtype=torch.float16,
    local_files_only=True,
    use_safetensors=True,
    add_watermarker=False,
    controlnet=self.controlnets,
    num_in_channels=4,
)
It loads successfully, but another error occurs:
RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.HalfTensor instead (while checking arguments for embedding)
Could it be that your base model and the base model of ControlNet are mismatched? That's just my guess.
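One quick way to check whether the single-file checkpoint itself is an inpainting model is to inspect its UNet conv_in weight: base SDXL checkpoints have 4 input channels there, true inpainting checkpoints have 9, which is exactly the mismatch the first traceback reports. A minimal sketch (not from the thread; the path is a placeholder and the key name assumes the standard LDM state-dict layout):

# Sketch: inspect the conv_in weight of a single-file SDXL checkpoint.
from safetensors import safe_open

ckpt_path = "sdxl_checkpoint.safetensors"  # placeholder path
with safe_open(ckpt_path, framework="pt") as f:
    conv_in = f.get_tensor("model.diffusion_model.input_blocks.0.0.weight")

print(conv_in.shape)  # torch.Size([320, 4, 3, 3]) for a base model, [320, 9, 3, 3] for inpainting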
Unable to reproduce: https://colab.research.google.com/gist/sayakpaul/ee9928961d26ac18d5b5c1e15363cf2a/scratchpad.ipynb?authuser=1.
Cc: @DN6.
It still raises the error for me on 0.27.1, but 0.26.3 is OK.
You must pass num_in_channels=4 because the checkpoint you are initializing from has 4 channels in the UNet's stem block.
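For reference, a minimal standalone sketch of the workaround described here (the checkpoint path is a placeholder, and controlnet is assumed to be an already loaded ControlNetModel):

import torch
from diffusers import StableDiffusionXLControlNetInpaintPipeline

# Override the UNet stem to 4 input channels when loading a 4-channel (non-inpainting) checkpoint.
pipe = StableDiffusionXLControlNetInpaintPipeline.from_single_file(
    "sdxl_checkpoint.safetensors",  # placeholder path
    controlnet=controlnet,          # assumed: a previously loaded ControlNetModel
    num_in_channels=4,
    torch_dtype=torch.float16,
)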
Thanks, but 0.26.3 is OK and doesn't require num_in_channels=4. Why does this difference exist?
Cc: @DN6
@dengfenglai321 I tried running the following snippet with version 0.26.3 and the error is still raised.
from diffusers import ControlNetModel, StableDiffusionXLControlNetInpaintPipeline
controlnet = ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0")
pipe = StableDiffusionXLControlNetInpaintPipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors",
    controlnet=controlnet,
)
Can you confirm the version you're using where num_in_channels isn't required? Can you run the following command and share the output?
diffusers-cli env
It is likely that you were using a version earlier than 0.26.0
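If it helps, a quick way to confirm the installed version (as an alternative to diffusers-cli env):

import diffusers
print(diffusers.__version__)  # e.g. 0.27.1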
@DN6 I'm getting this on 0.26.1; it looks like num_in_channels/num_channels/in_channels is ignored when using from_pretrained:
Keyword arguments {'num_in_channels': 9, 'ignore_mismatched_sizes': True} are not expected by StableDiffusionXLControlNetInpaintPipeline and will be ignored.
Any ideas how I can fix this?
I'm using this model (https://civitai.com/models/139562?modelVersionId=297320), which I converted to diffusers format, with diffusers/controlnet-canny-sdxl-1.0 as the ControlNet.
You are not supposed to pass num_in_channels when using from_pretrained(). Please provide a minimal yet reproducible code snippet.
This should show you the problem:
import torch  # imports and device were not in the original snippet; added for completeness
from diffusers import ControlNetModel, DPMSolverMultistepScheduler, StableDiffusionXLControlNetInpaintPipeline

device = "cuda"  # assumed

controlnet = ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained(
    "tiimgreen/real-vis-v30-inpaint-sdxl",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to(device)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_xformers_memory_efficient_attention()
tiimgreen/real-vis-v30-inpaint-sdxl was generated by running:
python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path realvisxlV40_v30InpaintBakedvae.safetensors --dump_path ./real-vis-v30-inpaint-sdxl --from_safetensors --pipeline_class_name StableDiffusionXLControlNetInpaintPipeline
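As a side check (not from the thread), a correctly converted inpainting repo should already report 9 input channels in its UNet config; a minimal sketch, assuming the repo id above:

from diffusers import UNet2DConditionModel

# Load only the UNet from the converted repo and check its configured input channels.
unet = UNet2DConditionModel.from_pretrained("tiimgreen/real-vis-v30-inpaint-sdxl", subfolder="unet")
print(unet.config.in_channels)  # 9 for an inpainting UNet; 4 means a base UNet was converted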
I don't understand the problem here; it seems the notebook @sayakpaul provided completely solves the issue, no? https://colab.research.google.com/gist/sayakpaul/ee9928961d26ac18d5b5c1e15363cf2a/scratchpad.ipynb?authuser=1
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Can we close this now?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.