quanwei zhang

Results: 9 comments by quanwei zhang

@ashawkey Thank you for your reply. The OS is CentOS Linux release 7.9.2009 (Core); `uname -a` reports: Linux mgt 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux....

Hi, I also have this question; have you resolved it?

> For wan2.1 trained with LoRA on open-source video-caption datasets such as vidgen and koala36m, roughly what is the final running_mean_loss?

When I train a LoRA on a small set of 10+ videos, the loss can get down to about 0.03-0.06; digital-human videos also fluctuate in roughly that range.

Same here: it converges around that value. I used roughly 10,000 samples.

(Reply by email to AliothChen, Wed, Sep 24, 2025, re: [modelscope/DiffSynth-Studio] Wan2.1 lora training loss)
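For reference, the running_mean_loss discussed above is typically an exponential moving average of the per-step training loss. A minimal sketch of such a tracker (the class name and the decay value are illustrative assumptions, not taken from DiffSynth-Studio):

```python
class RunningMeanLoss:
    """Exponential moving average (EMA) of the per-step training loss.

    Note: this is a generic sketch, not DiffSynth-Studio's implementation;
    the actual smoothing factor used there may differ.
    """

    def __init__(self, decay: float = 0.9):
        self.decay = decay
        self.value = None  # undefined until the first update

    def update(self, loss: float) -> float:
        if self.value is None:
            self.value = loss  # seed the EMA with the first loss
        else:
            self.value = self.decay * self.value + (1 - self.decay) * loss
        return self.value


# Usage: feed in the per-step losses and read off the smoothed value.
tracker = RunningMeanLoss(decay=0.9)
smoothed = 0.0
for step_loss in [0.20, 0.10, 0.05, 0.04]:
    smoothed = tracker.update(step_loss)
```

Because the EMA weights history heavily, the smoothed value lags the raw per-step loss; that is why it can hover stably in a narrow band like 0.03-0.06 even when individual step losses jump around.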

I haven't tested many settings; around 2-3 epochs is already enough to see an effect, with a learning rate of 2e-5.

(Reply by email to AliothChen, Wed, Sep 24, 2025)

I used 64 GPUs; a few hours was enough, at a resolution of 480*720.

(Reply by email to SensenGao, Mon, Oct 13, 2025)

That was for the 14B model.

(Reply by email to SensenGao, Mon, Oct 13, 2025, re: Wan2.1 lora training loss, Issue#943)

I ran into this same problem at the DeepSpeed ZeRO-3 stage; with only ZeRO-2 I am again limited to 17 frames.
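For anyone comparing the two stages, a minimal sketch of a DeepSpeed ZeRO-2 config is below. The batch sizes and bf16 setting are illustrative assumptions, not values from this issue; the fields your particular training script expects may differ.

```python
import json

# Hypothetical minimal DeepSpeed config for ZeRO stage 2.
# Switching "stage" to 3 additionally partitions the model parameters,
# which lowers per-GPU memory but changes checkpointing behavior.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 1,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,               # partitions optimizer states and gradients
        "overlap_comm": True,      # overlap reduce with backward pass
        "contiguous_gradients": True,
    },
}

# DeepSpeed consumes this as a JSON file passed via --deepspeed.
with open("ds_zero2_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

If ZeRO-2 caps you at 17 frames, the usual levers are gradient checkpointing, a lower resolution, or moving to ZeRO-3 with offloading, each trading compute or I/O for memory.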