ValueError: CLIPVisionModelWithProjection does not support `device_map='auto'`. To implement support, the model class needs to implement the `_no_split_modules` attribute.
Hello, I am trying to run the code in the provided Colab. I have not changed anything in the code yet. After I ran this part:
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    text_encoder=text_encoder,  # pass the previously instantiated 8bit text encoder
    unet=None,
    device_map="auto",
)
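(For context, text_encoder here is the 8-bit text encoder created in the preceding Colab cell; a minimal sketch of that cell, following the 8-bit T5 setup from the IF README, and assuming bitsandbytes is installed:)

from transformers import T5EncoderModel

# Sketch: load only IF's T5 text encoder, quantized to 8-bit
text_encoder = T5EncoderModel.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    subfolder="text_encoder",
    device_map="auto",
    load_in_8bit=True,
    variant="8bit",
)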
I got the following error:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-8-55e2717ce150> in <cell line: 3>()
      1 from diffusers import DiffusionPipeline
      2
----> 3 pipe = DiffusionPipeline.from_pretrained(
      4     "DeepFloyd/IF-I-XL-v1.0",
      5     text_encoder=text_encoder,  # pass the previously instantiated 8bit text encoder

... 3 frames ...

/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py in _get_no_split_modules(self, device_map)
   1688             if isinstance(module, PreTrainedModel):
   1689                 if module._no_split_modules is None:
-> 1690                     raise ValueError(
   1691                         f"{module.__class__.__name__} does not support `device_map='{device_map}'`. To implement support, the model "
   1692                         "class needs to implement the `_no_split_modules` attribute."

ValueError: CLIPVisionModelWithProjection does not support `device_map='auto'`. To implement support, the model class needs to implement the `_no_split_modules` attribute.
It only worked after I removed device_map="auto".
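For reference, a minimal sketch of the variant that worked for me, i.e. the same cell with device_map="auto" dropped (text_encoder being the 8-bit encoder from the earlier cell):

from diffusers import DiffusionPipeline

# Same call as above, minus device_map="auto"
pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    text_encoder=text_encoder,
    unet=None,
)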
I'm seeing the same thing...
I got the same error.
I got the same error. May I know when it will be resolved?
If anyone else runs into this, I swapped in

pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    text_encoder=text_encoder,  # pass the previously instantiated 8bit text encoder
    unet=None,
    max_memory={0: "15GB", "cpu": "13GB"},
)

for that block and it worked. I also ran into a follow-up bug where I needed to add .to("cuda") to the pipelines, since float16 only works on the GPU but the model was being split across both GPU and CPU, which caused failures. In step 1.4, put:
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    text_encoder=None,
    variant="fp16",
    torch_dtype=torch.float16,
).to("cuda")
and do the same for the other steps; you should then be able to generate an image on Colab.
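To make the hand-off between the two pipelines concrete, here is a rough sketch of how steps 1.3 and 1.4 fit together, assuming the Colab's two-stage structure (the prompt text is illustrative, not the notebook's exact code):

import gc
import torch
from diffusers import DiffusionPipeline

# Step 1.3 (sketch): text-encoder-only pipeline, no UNet loaded
pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    text_encoder=text_encoder,  # 8-bit T5 from the earlier cell
    unet=None,
)
prompt_embeds, negative_embeds = pipe.encode_prompt(
    "a photo of an astronaut riding a horse"
)

# Free the text encoder before loading the UNet, to stay within Colab memory
del text_encoder, pipe
gc.collect()
torch.cuda.empty_cache()

# Step 1.4 (sketch): UNet-only pipeline in fp16, pinned to the GPU
pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    text_encoder=None,
    variant="fp16",
    torch_dtype=torch.float16,
).to("cuda")

# Generate from the precomputed embeddings
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    output_type="pil",
).images[0]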