magic-animate
HeaderTooLarge
When I run the scripts, I get this error:
/mnt/Samsung980Pro/WORK/AI/arch/magic-animate/magicanimate/pipelines/pipeline_animation.py:43: FutureWarning: Importing `DiffusionPipeline` or `ImagePipelineOutput` from diffusers.pipeline_utils is deprecated. Please import from diffusers.pipelines.pipeline_utils instead.
from diffusers.pipeline_utils import DiffusionPipeline
Traceback (most recent call last):
File "/mnt/Samsung980Pro/arch_linux_software/anaconda3/envs/manimate/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/mnt/Samsung980Pro/arch_linux_software/anaconda3/envs/manimate/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/mnt/Samsung980Pro/WORK/AI/arch/magic-animate/magicanimate/pipelines/animation.py", line 282, in <module>
run(args)
File "/mnt/Samsung980Pro/WORK/AI/arch/magic-animate/magicanimate/pipelines/animation.py", line 271, in run
main(args)
File "/mnt/Samsung980Pro/WORK/AI/arch/magic-animate/magicanimate/pipelines/animation.py", line 76, in main
text_encoder = CLIPTextModel.from_pretrained(config.pretrained_model_path, subfolder="text_encoder")
File "/mnt/Samsung980Pro/arch_linux_software/anaconda3/envs/manimate/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2805, in from_pretrained
state_dict = load_state_dict(resolved_archive_file)
File "/mnt/Samsung980Pro/arch_linux_software/anaconda3/envs/manimate/lib/python3.8/site-packages/transformers/modeling_utils.py", line 458, in load_state_dict
with safe_open(checkpoint_file, framework="pt") as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
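For context, HeaderTooLarge usually means the .safetensors file on disk is not real weights but a git-lfs pointer stub: safetensors interprets the first 8 bytes as a little-endian length of its JSON header, and the ASCII text of a pointer file decodes to an absurdly large number. A minimal sketch to classify a file (the function name and return labels are my own, not part of safetensors):

```python
import json
import os
import struct

def inspect_safetensors(path):
    """Classify a .safetensors file: real weights, git-lfs pointer stub, or corrupt."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        prefix = f.read(8)
        # A git-lfs pointer stub is plain text starting with "version https://git-lfs..."
        if prefix.startswith(b"version"):
            return "lfs-pointer"
        # A real safetensors file starts with an 8-byte little-endian header length
        header_len = struct.unpack("<Q", prefix)[0]
        if header_len > size - 8:
            return "corrupt-header"  # this is the condition that surfaces as HeaderTooLarge
        json.loads(f.read(header_len))  # header parses as JSON -> plausible weights
        return "ok"
```

Running this on stable-diffusion-v1-5/text_encoder/model.safetensors should tell you whether the file is only a pointer stub that git lfs never materialized.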
This looks to me like a problem with the model directory layout; however, the models were cloned directly from Hugging Face and the hierarchy seems to match what is expected. I don't see the issue here:
├── MagicAnimate
│ ├── appearance_encoder
│ │ ├── config.json
│ │ └── diffusion_pytorch_model.safetensors
│ ├── densepose_controlnet
│ │ ├── config.json
│ │ └── diffusion_pytorch_model.safetensors
│ ├── README.md
│ └── temporal_attention
│ └── temporal_attention.ckpt
├── sd-vae-ft-mse
│ ├── config.json
│ ├── diffusion_pytorch_model.bin
│ ├── diffusion_pytorch_model.safetensors
│ └── README.md
└── stable-diffusion-v1-5
├── feature_extractor
│ └── preprocessor_config.json
├── model_index.json
├── README.md
├── safety_checker
│ ├── config.json
│ ├── model.fp16.safetensors
│ ├── model.safetensors
│ ├── pytorch_model.bin
│ └── pytorch_model.fp16.bin
├── scheduler
│ └── scheduler_config.json
├── text_encoder
│ ├── config.json
│ ├── model.fp16.safetensors
│ ├── model.safetensors
│ ├── pytorch_model.bin
│ └── pytorch_model.fp16.bin
├── tokenizer
│ ├── merges.txt
│ ├── special_tokens_map.json
│ ├── tokenizer_config.json
│ └── vocab.json
├── unet
│ ├── config.json
│ ├── diffusion_pytorch_model.bin
│ ├── diffusion_pytorch_model.fp16.bin
│ ├── diffusion_pytorch_model.fp16.safetensors
│ ├── diffusion_pytorch_model.non_ema.bin
│ ├── diffusion_pytorch_model.non_ema.safetensors
│ └── diffusion_pytorch_model.safetensors
├── v1-5-pruned.ckpt
├── v1-5-pruned-emaonly.ckpt
├── v1-5-pruned-emaonly.safetensors
├── v1-5-pruned.safetensors
├── v1-inference.yaml
└── vae
├── config.json
├── diffusion_pytorch_model.bin
├── diffusion_pytorch_model.fp16.bin
├── diffusion_pytorch_model.fp16.safetensors
└── diffusion_pytorch_model.safetensors
Download stable-diffusion-v1-5/text_encoder/model.safetensors, not pytorch_model.bin.
After cloning with "git lfs clone", remember to run "git lfs pull" to actually download the weights; without it the checkout contains only small git-lfs pointer files, which is exactly what triggers HeaderTooLarge.
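A quick way to spot a stranded pointer stub before re-running the pipeline is to look at the first bytes of the file: real weights are gigabytes of binary data, while a git-lfs stub is a tiny text file beginning with "version". The snippet below simulates that check against a demo path (the demo/ directory is a stand-in for the real model tree; the actual fix is running git lfs pull inside the cloned repo):

```python
import os

# Simulate what a half-finished git-lfs clone leaves behind: a text stub
# where text_encoder/model.safetensors should be.
os.makedirs("demo/text_encoder", exist_ok=True)
stub = "demo/text_encoder/model.safetensors"
with open(stub, "wb") as f:
    f.write(b"version https://git-lfs.github.com/spec/v1\n")

def needs_lfs_pull(path):
    """True if the file is a git-lfs pointer stub rather than real weights."""
    with open(path, "rb") as f:
        return f.read(7) == b"version"

if needs_lfs_pull(stub):
    # Real fix: cd into the cloned model repo and run `git lfs pull`
    print(f"{stub} is only a git-lfs pointer stub; run `git lfs pull`")
```

The same check works on any of the .safetensors, .bin, or .ckpt files in the tree above.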