
`.from_single_file` breaks in v0.28.2 if model_index.json is empty; unintuitive logs

underscore-en opened this issue 1 year ago · 9 comments

Describe the bug

I've been using diffusers for a couple of months, primarily for StableDiffusionPipeline and StableDiffusionXLPipeline image inference from a downloaded .safetensors file. A while back I upgraded to v0.28.2 and .from_single_file started complaining (see the logs below).

I did some debugging and concluded that, for whatever reason, my .cache/hf/hub/model-sth-xl-base/snapshot/model_index.json is empty, which caused this error (the keys in that JSON are used to populate the pipeline's expected components).

The logs are really not intuitive. For whatever reason this isn't an issue in v0.27.2.

Reproduction

Empty the JSON in .cache/hf/hub/model-sth-xl-base/snapshot/model_index.json, then call .from_single_file (see the sketch below).
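
A minimal sketch of the repro (the cache path, repo id, and checkpoint filename below are placeholders, not the exact ones from my machine; the same thing happens with StableDiffusionPipeline):

from pathlib import Path
from diffusers import StableDiffusionXLPipeline

# Placeholder cache entry: point this at whichever snapshot holds the cached model_index.json.
repo_dir = Path.home() / ".cache/huggingface/hub/models--stabilityai--stable-diffusion-xl-base-1.0"
for index in repo_dir.glob("snapshots/*/model_index.json"):
    index.write_text("")  # simulate the corrupted (empty) cached config

# Placeholder local checkpoint path.
pipe = StableDiffusionXLPipeline.from_single_file("./sd_xl_base_1.0.safetensors")
# On v0.28.x this raises the ValueError shown in the logs below.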

Logs

File ".....\.venv\lib\site-packages\diffusers\loaders\single_file.py", line 529, in from_single_file
    raise ValueError(
ValueError: Pipeline <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> expected {'image_encoder', 'scheduler', 'feature_extractor', 'unet', 'text_encoder', 'tokenizer', 'vae', 'safety_checker'}, but only set() were passed.
(.venv)

System Info

Python 3.10 on Windows, in a .venv

Who can help?

No response

underscore-en avatar Jun 11 '24 16:06 underscore-en

Cc: @DN6

sayakpaul avatar Jun 18 '24 15:06 sayakpaul

Hi @underscore-en, we refactored from_single_file in version 0.28.0 to rely on the model_index.json to configure the pipeline, which could be what's causing the issue.

Is it possible to host your checkpoint on the HF Hub and share it? Additionally, could you share a minimal code example we could use to reproduce the issue?
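
In the meantime, if the cached model_index.json really is empty, one thing worth trying is simply clearing that cache entry so the pipeline config is fetched again on the next load (a sketch only; the cache path and checkpoint filename are placeholders):

import shutil
from pathlib import Path

from diffusers import StableDiffusionXLPipeline

# Placeholder: the cache entry whose snapshot contains the truncated model_index.json.
repo_dir = Path.home() / ".cache/huggingface/hub/models--stabilityai--stable-diffusion-xl-base-1.0"
shutil.rmtree(repo_dir, ignore_errors=True)

# With the corrupt entry gone, from_single_file can re-download the pipeline config.
pipe = StableDiffusionXLPipeline.from_single_file("./sd_xl_base_1.0.safetensors")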

DN6 avatar Jul 01 '24 10:07 DN6

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Sep 14 '24 15:09 github-actions[bot]

I have the same issue. Specifically, I am using SD 1.5 and loading more than one model into GPU memory. The first model loaded with from_single_file succeeds, but the second fails because there is no model_index.json for openai/clip-vit-large-patch14, which is the default text encoder. If I swap the loading order of the two models, the second one still fails.

junnikokuki avatar Oct 10 '24 04:10 junnikokuki

Hi @junnikokuki, could you please share a code snippet that reproduces the error you're describing? It's not clear to me how you are doing this: are you reloading the whole pipeline or just swapping the modules in it?

asomoza avatar Oct 10 '24 12:10 asomoza

> Hi @junnikokuki, could you please share a code snippet that reproduces the error you're describing? It's not clear to me how you are doing this: are you reloading the whole pipeline or just swapping the modules in it?

Code:

from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler
import torch

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16).to("cuda")
model_path = "./realcartoonAnime_v5.safetensors"
pipe = StableDiffusionControlNetImg2ImgPipeline.from_single_file(
        model_path,
        original_config="./v1-inference.yaml",
        controlnet=controlnet,
        from_safetensors=True,
        torch_dtype=torch.float16, 
        local_files_only=True,
    )
pipe.to("cuda", torch_dtype=torch.float16)
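
# On diffusers >= 0.28 the failure reportedly shows up on the second
# single-file load below, because the cached openai/clip-vit-large-patch14
# (the default text encoder) has no model_index.json.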

model_path2 = "./dreamshaperV8.safetensors"
pipe2 = StableDiffusionControlNetImg2ImgPipeline.from_single_file(
        model_path2,
        original_config="./v1-inference.yaml",
        controlnet=controlnet,
        from_safetensors=True,
        torch_dtype=torch.float16,
        local_files_only=True,
    )
pipe2.to("cuda", torch_dtype=torch.float16)

v1-inference.yaml is from: https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/v1-inference.yaml

realcartoonAnime_v5 pruned version is from https://civitai.com/models/96629?modelVersionId=160438

dreamshaperV8 is from: https://civitai.com/models/4384?modelVersionId=128713

diffusers version 0.28.0 and later.

junnikokuki avatar Oct 11 '24 02:10 junnikokuki

You can first use local_files_only=False to download the cache.
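
Roughly like this (a sketch; same local files as in the snippet above):

from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel
import torch

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
).to("cuda")

# First load with local_files_only=False so the component configs get downloaded and cached.
pipe = StableDiffusionControlNetImg2ImgPipeline.from_single_file(
    "./realcartoonAnime_v5.safetensors",
    original_config="./v1-inference.yaml",
    controlnet=controlnet,
    torch_dtype=torch.float16,
    local_files_only=False,
)

# Later loads can then switch back to local_files_only=True and reuse the cache.
pipe2 = StableDiffusionControlNetImg2ImgPipeline.from_single_file(
    "./dreamshaperV8.safetensors",
    original_config="./v1-inference.yaml",
    controlnet=controlnet,
    torch_dtype=torch.float16,
    local_files_only=True,
)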

junnikokuki avatar Oct 11 '24 02:10 junnikokuki

@junnikokuki Is the issue resolved after downloading to the cache?

DN6 avatar Oct 15 '24 09:10 DN6

> @junnikokuki Is the issue resolved after downloading to the cache?

No, openai/clip-vit-large-patch14 is already cached, and it does not officially have a model_index.json.
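
For reference, that's easy to confirm (a quick sketch using huggingface_hub; needs network access):

from huggingface_hub import list_repo_files

files = list_repo_files("openai/clip-vit-large-patch14")
print("config.json" in files)       # True: it's a single-model repo
print("model_index.json" in files)  # False: no pipeline-level index there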

junnikokuki avatar Oct 16 '24 09:10 junnikokuki

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Nov 09 '24 15:11 github-actions[bot]

Gently pinging @DN6 again.

sayakpaul avatar Nov 09 '24 15:11 sayakpaul

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Dec 04 '24 15:12 github-actions[bot]

Seems like this was fixed recently. Closing this. Feel free to re-open if not.

sayakpaul avatar Dec 04 '24 16:12 sayakpaul