diffusers
`.from_single_file` breaking in v0.28.2 if model_index.json is empty, unintuitive logs
Describe the bug
I've been using diffusers for a couple of months, primarily for StableDiffusionPipeline and StableDiffusionXLPipeline image inference with a downloaded .safetensors file.
A while back I upgraded to v0.28.2 and .from_single_file started complaining (refer to the logs).
I did some debugging and concluded that, for whatever reason, my .cache/hf/hub/model-sth-xl-base/snapshot/model_index.json is empty, which causes this error (the keys in that JSON are used to populate the pipeline's components).
The logs are really not intuitive. For whatever reason this isn't an issue in v0.27.2.
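For reference, this is roughly how I checked the cached file (a sketch; the path below is a placeholder for my local snapshot directory, adjust it to your own cache layout):

import json

# Placeholder path; point this at the cached snapshot under
# ~/.cache/huggingface/hub/<repo>/snapshots/<hash>/model_index.json
path = "model_index.json"

with open(path) as f:
    text = f.read()

print(repr(text))  # prints '' here, i.e. the file is empty
json.loads(text)   # an empty string raises json.JSONDecodeError, so no
                   # component keys can come out of this file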
Reproduction
Empty the JSON in .cache/hf/hub/model-sth-xl-base/snapshot/model_index.json, then load a checkpoint with .from_single_file.
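Concretely (a sketch; the checkpoint filename is a hypothetical local file), a from_single_file call like the following then fails with the ValueError in the logs below:

import torch
from diffusers import StableDiffusionPipeline

# Hypothetical local single-file checkpoint; any SD 1.5-style
# .safetensors follows the same call pattern
pipe = StableDiffusionPipeline.from_single_file(
    "./my_checkpoint.safetensors",
    torch_dtype=torch.float16,
)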
Logs
File ".....\.venv\lib\site-packages\diffusers\loaders\single_file.py", line 529, in from_single_file
raise ValueError(
ValueError: Pipeline <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> expected {'image_encoder', 'scheduler', 'feature_extractor', 'unet', 'text_encoder', 'tokenizer', 'vae', 'safety_checker'}, but only set() were passed.
System Info
Python 3.10 in a Windows .venv
Who can help?
No response
Cc: @DN6
Hi @underscore-en, we refactored from_single_file in version 0.28.0 to rely on model_index.json to configure the pipeline, which could be what's causing the issue.
Is it possible to host your checkpoint on the HF Hub and share it? Additionally, could you share a minimal code example we could use to reproduce the issue?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
I have the same issue. Specifically, I am using SD 1.5 and load more than one model into GPU memory. The first model loaded with from_single_file succeeds, but the second fails because there is no model_index.json for openai/clip-vit-large-patch14, which is the default text encoder. If I swap the loading order of the two models, the second one still fails.
Hi @junnikokuki, could you please share a code snippet that reproduces the error you're describing? It's not clear to me how you are doing this: are you reloading the whole pipeline or just swapping the modules in it?
Code:
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler
import torch

# ControlNet shared by both pipelines
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16).to("cuda")

# First checkpoint: this one loads successfully
model_path = "./realcartoonAnime_v5.safetensors"
pipe = StableDiffusionControlNetImg2ImgPipeline.from_single_file(
    model_path,
    original_config="./v1-inference.yaml",
    controlnet=controlnet,
    from_safetensors=True,
    torch_dtype=torch.float16,
    local_files_only=True,
)
pipe.to("cuda", torch_dtype=torch.float16)

# Second checkpoint: this one fails with the missing model_index.json error
model_path2 = "./dreamshaperV8.safetensors"
pipe2 = StableDiffusionControlNetImg2ImgPipeline.from_single_file(
    model_path2,
    original_config="./v1-inference.yaml",
    controlnet=controlnet,
    from_safetensors=True,
    torch_dtype=torch.float16,
    local_files_only=True,
)
pipe2.to("cuda", torch_dtype=torch.float16)
v1-inference.yaml is from: https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/v1-inference.yaml
realcartoonAnime_v5 pruned version is from https://civitai.com/models/96629?modelVersionId=160438
dreamshaperV8 is from: https://civitai.com/models/4384?modelVersionId=128713
This happens with diffusers 0.28.0 and later.
You can first run with local_files_only=False to download everything into the cache.
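For clarity, that first run is the same call as in the snippet above (reusing model_path and controlnet) with only the local_files_only flag flipped; a sketch:

# One-time run with network access so the component configs
# (e.g. openai/clip-vit-large-patch14) are downloaded into the HF cache;
# afterwards the snippet above with local_files_only=True can be used.
pipe = StableDiffusionControlNetImg2ImgPipeline.from_single_file(
    model_path,
    original_config="./v1-inference.yaml",
    controlnet=controlnet,
    from_safetensors=True,
    torch_dtype=torch.float16,
    local_files_only=False,
)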
@junnikokuki Is the issue resolved after downloading to cache?
No. openai/clip-vit-large-patch14 is already cached, and the official repo does not include a model_index.json.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Gently pinging @DN6 again.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Seems like this was fixed recently. Closing this. Feel free to re-open if not.