DiffSynth-Studio
We cannot detect the model type. No models are loaded.
model_manager.load_models([
    "models/lightning_logs/version_2/checkpoints/epoch=9-step=5000.ckpt",
    "models/Wan-AI/Wan2.1-T2V-14B/models_t5_umt5-xxl-enc-bf16.pth",
    "models/Wan-AI/Wan2.1-T2V-14B/Wan2.1_VAE.pth",
])
After training my own ckpt and following the tutorial to run test.py, I get:
Loading models from: models/lightning_logs/version_2/checkpoints/epoch=9-step=5000.ckpt
We cannot detect the model type. No models are loaded.
Maybe you should check your LoRA model: "*.ckpt" should be a file, not a folder. If you use DeepSpeed, it may be saved as a folder.
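As a quick sanity check before passing the path to load_models, you can classify what is actually on disk. This is a minimal sketch using only the standard library; the diagnosis strings and the function name are my own, not part of DiffSynth-Studio:

```python
import os

def diagnose_ckpt(path):
    """Classify a checkpoint path before handing it to load_models.

    DeepSpeed ZeRO saves a *directory* of shards rather than a single
    file; such a directory must be consolidated into one file first
    (e.g. with the zero_to_fp32.py script DeepSpeed leaves next to it).
    """
    if not os.path.exists(path):
        return "missing: check the path"
    if os.path.isdir(path):
        return "directory: consolidate DeepSpeed shards first"
    return "file: can be passed to load_models"
```

If this reports a directory, consolidating the shards is the first step; loading the folder directly is what produces "We cannot detect the model type".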
Hello, your email has been received. Thank you! Yu Qing
@23332323 LoRA models should be loaded after the base model.
model_manager.load_models([
"models/Wan-AI/Wan2.1-T2V-1.3B/diffusion_pytorch_model.safetensors",
"models/Wan-AI/Wan2.1-T2V-1.3B/models_t5_umt5-xxl-enc-bf16.pth",
"models/Wan-AI/Wan2.1-T2V-1.3B/Wan2.1_VAE.pth",
])
model_manager.load_lora("models/lightning_logs/version_1/checkpoints/epoch=0-step=500.ckpt", lora_alpha=1.0)
I use DeepSpeed to train the i2v-14b model, but only the optimizer is saved; I cannot find any model file.
I ran into the same problem. Is there a solution yet? I'm loading a fine-tuned model, and it does not load successfully.
Hello, I have encountered a similar problem. Could you share the code you modified to load the LoRA for testing? Thank you.
Hello, I also encountered a similar problem. Training left me with weight files like this, i.e. the DeepSpeed shard files. May I ask how you loaded them in the end? Thank you.
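For reference, here is a small standard-library sketch that lists the shard files inside such a checkpoint directory. The `*_states.pt` naming is an assumption based on DeepSpeed's usual ZeRO layout (`mp_rank_00_model_states.pt`, `*_optim_states.pt`), so inspect your own folder to confirm:

```python
import glob
import os

def find_zero_shards(ckpt_dir):
    """List DeepSpeed ZeRO shard files under a checkpoint directory.

    Typical names (an assumption; verify against your own run) are
    mp_rank_00_model_states.pt and zero_*_optim_states.pt.
    """
    pattern = os.path.join(ckpt_dir, "**", "*_states.pt")
    return sorted(glob.glob(pattern, recursive=True))
```

If only `*_optim_states.pt` files appear, the model weights were never written. Otherwise, DeepSpeed usually copies a zero_to_fp32.py script into the save directory that can consolidate the shards into a single fp32 state dict.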
Are there any solutions? Can you share your code? @Artiprocher
After full training, I used zero_to_fp32.py to convert the *.pt checkpoint to *.safetensors. At inference time, loading the model still prints: "No wan_video_dit models available. We cannot detect the model type. No models are loaded." How do I fix this?
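One common cause, though this is an assumption on my part, is that the loader detects the model type from parameter names, and the converted state dict still carries training-time key prefixes such as `module.` from DeepSpeed wrapping. A minimal sketch for stripping them; the prefix list is hypothetical, so print a few of your own keys first:

```python
def strip_key_prefixes(state_dict, prefixes=("module.",)):
    """Return a copy of state_dict with any of the given key prefixes
    removed, so parameter names match what the loader expects.

    `prefixes` is a guess: inspect list(state_dict)[:5] to see which
    prefix (if any) your checkpoint actually uses.
    """
    out = {}
    for key, value in state_dict.items():
        for prefix in prefixes:
            if key.startswith(prefix):
                key = key[len(prefix):]
                break
        out[key] = value
    return out
```

After stripping, re-save the state dict (e.g. with safetensors) and try load_models again on the cleaned file.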
I no longer have this problem.
Could you share how it was resolved? Thanks.
Hmm, I have encountered a similar problem. Have you solved it yet? Thank you.