LightX2V
Light Video Generation Inference Framework
### Description
I get errors when running the t2v path of scripts/wan/run_wan_skyreels_v2_df.sh.

### Steps to Reproduce
1. I first changed the --model_cls parameter of the T2V command in scripts/wan/run_wan_skyreels_v2_df.sh to wan2.1_skyreels_v2_df.
2. Running the script first raised AttributeError: 'EasyDict' object has no attribute 'target_shape'. I could work around it by renaming run_input_encoder() in wan_skyreels_v2_df_runner.py to a different function name.
3. Running the script again then failed with: TypeError: WanRunner.run_text_encoder() takes 3 positional arguments but 5 were given

### Log...
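For reference, here is a minimal, self-contained sketch of how this class of TypeError arises when a subclass passes extra positional arguments to a base-class method. The class and method names below only mirror the error message; they are not the actual LightX2V runner code.

```python
# Hypothetical sketch -- mirrors the argument-count mismatch in the reported
# TypeError; these classes are illustrative, not the real LightX2V runners.
class WanRunner:
    def run_text_encoder(self, text, config):
        # Base signature: self + 2 parameters = 3 positional arguments total.
        return f"encoded({text!r}, shift={config.get('shift')})"


class WanSkyReelsV2DFRunner(WanRunner):
    def run_input_encoder(self, text, img, config, extra):
        # Forwarding 4 arguments (plus self = 5) to a method that accepts
        # only 3 positionals reproduces the reported error.
        return self.run_text_encoder(text, img, config, extra)


runner = WanSkyReelsV2DFRunner()
try:
    runner.run_input_encoder("a prompt", None, {"shift": 8}, None)
except TypeError as e:
    # TypeError: WanRunner.run_text_encoder() takes 3 positional arguments but 5 were given
    print(e)
```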
What's the difference between rank 32, rank 64, and rank 128?
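For context, "rank" here presumably refers to the LoRA rank: the inner dimension of the low-rank update added to each weight matrix. A higher rank means more trainable parameters and potentially higher fidelity, at the cost of a larger LoRA file and more memory. A generic sketch of the idea (not the LightX2V loading code); the 5120x5120 layer size is only an illustrative dimension:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Generic LoRA illustration: y = W x + (alpha / r) * B(A(x)), where r is the rank."""

    def __init__(self, base: nn.Linear, rank: int = 32, alpha: float = 32.0):
        super().__init__()
        self.base = base                                              # frozen pretrained layer
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)   # down-projection
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)  # up-projection
        self.scale = alpha / rank
        nn.init.zeros_(self.lora_b.weight)                            # start as a no-op update

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# rank 32 / 64 / 128 only change the size of the low-rank factors:
for r in (32, 64, 128):
    layer = LoRALinear(nn.Linear(5120, 5120), rank=r)
    extra = layer.lora_a.weight.numel() + layer.lora_b.weight.numel()
    print(f"rank={r:>3}: {extra / 1e6:.2f}M extra parameters per illustrative 5120x5120 layer")
```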
Following the tutorial at https://lightx2v-en.readthedocs.io/en/latest/deploy_guides/model_structure.html#standard-directory-structure, my understanding is that, because the latest HF model repo renamed its directories to distill_fp8 and distill_int8, I need to rename them back manually. I also don't understand what the original/ directory is supposed to contain — should I put the official Wan2.1 model there? But the tutorial also mentions distill_model.safetensors. And what is this directory for: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models ? With my current setup, and without quantization enabled, run_wan_i2v_distill_4step_cfg.sh produces an all-black video, and I don't know what the problem is. Thanks!
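While waiting for an answer, a small helper like the following can make it easier to compare what is actually on disk against the tutorial's expected tree. The directory name below is taken from this question, not from a confirmed LightX2V layout:

```python
from pathlib import Path

# Hypothetical helper: list what is actually present under the model root so it
# can be compared against the tutorial's standard directory structure.
MODEL_ROOT = Path("Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v")

def summarize(root: Path, depth: int = 2) -> None:
    for path in sorted(root.rglob("*")):
        rel = path.relative_to(root)
        if len(rel.parts) > depth:
            continue
        size = f"{path.stat().st_size / 1e9:.2f} GB" if path.is_file() else "dir"
        print(f"{rel}  ({size})")

if MODEL_ROOT.exists():
    summarize(MODEL_ROOT)
else:
    print(f"{MODEL_ROOT} not found -- check the local path used when downloading.")
```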
Hi authors, I can't find the related weights or a download path. How do I run the pipeline? Thanks for your help.
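In case it helps, a minimal sketch of pulling a checkpoint with huggingface_hub is shown below. The repo id is the one mentioned elsewhere on this page; whether it is the right checkpoint for your pipeline is an assumption.

```python
from huggingface_hub import snapshot_download

# Download an entire model repo to a local directory. The repo_id below is
# taken from another question on this page and may not be the one you need.
local_dir = snapshot_download(
    repo_id="lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v",
    local_dir="./Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v",
)
print("weights downloaded to", local_dir)
```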
Hello, I have a few questions about distilling the i2v-14b model.
1. Are the CFG and step distillation done at the same time? That is, without changing the model architecture to train a cfg-embedding, do you do CFG distillation first and then step distillation? Also, is the distillation scheme DMD2?
2. During CFG distillation, is the guidance scale fixed at the default 5.0, or is it sampled randomly within some range?
3. Roughly how much data was used to distill i2v-14b-720p, and how much compute was used for training?
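For readers unfamiliar with the term, CFG distillation bakes the classifier-free-guidance combination into a single forward pass so the guided result no longer needs two model evaluations per step. Below is a generic sketch of the guidance rule being distilled, not the authors' training code; whether scale=5.0 matches their setup is exactly what question 2 asks.

```python
import torch

def cfg_noise_prediction(model, x_t, t, cond, uncond, scale: float = 5.0):
    """Standard classifier-free guidance: two forward passes per step.

    eps = eps_uncond + scale * (eps_cond - eps_uncond)

    CFG distillation trains a student to match this combined output with a
    single conditional pass, which removes the 2x cost at inference time.
    """
    eps_cond = model(x_t, t, cond)
    eps_uncond = model(x_t, t, uncond)
    return eps_uncond + scale * (eps_cond - eps_uncond)

# Toy usage with a stand-in "model" so the sketch runs on its own.
toy_model = lambda x, t, c: x * 0.1 + c
x_t = torch.randn(1, 4, 8, 8)
print(cfg_noise_prediction(toy_model, x_t, t=999, cond=0.2, uncond=0.0).shape)
```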
Any plans for WAN2.2 Self-Forcing? I'm really looking forward to it.
Hey, can you share some details on how long it took and how many GPUs were needed to train these LoRAs?