LightX2V
Light Video Generation Inference Framework
The custom cache coefficients in the current documentation are all for the wan2.1 model. Are there corresponding coefficients for wan2.2, and could the documentation be updated to include them?
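A hedged sketch, not LightX2V's actual API: assuming the custom cache coefficients are TeaCache-style polynomial rescaling factors fitted from calibration statistics, the same fitting procedure could in principle be repeated for wan2.2 once per-step residual statistics are collected. All names and numbers below are illustrative placeholders.

```python
# Hypothetical sketch: fitting cache-rescaling coefficients for a new model.
# Assumes you have logged, per denoising step, the relative change of the
# model input and of the model output over a few calibration prompts.
import numpy as np

# Placeholder calibration data; replace with values collected from wan2.2 runs.
rel_input_change = np.array([0.05, 0.12, 0.20, 0.31, 0.45])
rel_output_change = np.array([0.04, 0.10, 0.22, 0.35, 0.50])

# Fit a polynomial mapping input change -> output change, the usual form
# of TeaCache-style coefficients; the degree here is only an example.
coefficients = np.polyfit(rel_input_change, rel_output_change, deg=4)
print(coefficients.tolist())
```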
I am running run_wan22_moe_i2v_cfg_ulysses.sh on 4× RTX 3090 GPUs, but the process gets stuck when loading the high and low models. I am using the configuration file wan22_moe_i2v_cfg_ulysses.json under the...
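A hedged diagnostic, independent of LightX2V internals: when multi-GPU loading hangs, it is worth first confirming that the four 3090s can complete a basic NCCL collective at all. The script name and launch command below are illustrative.

```python
# Hypothetical sanity check, launched with:
#   torchrun --nproc_per_node=4 nccl_check.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)
    t = torch.ones(1, device=f"cuda:{local_rank}")
    dist.all_reduce(t)  # hangs here if NCCL / GPU peer access is misconfigured
    print(f"rank {dist.get_rank()}: all_reduce ok, value={t.item()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

If this check itself hangs, retrying with NCCL_P2P_DISABLE=1 set in the environment is a common workaround on consumer GPUs.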
There is a new SOTA distillation method, and it is already available for Wan2.1 at https://huggingface.co/worstcoder/rcm-Wan/tree/main. Can we get it for Wan2.2?
Hi! I have a couple of questions regarding T2V for WAN2.2. Looking at the Hugging Face repo, I only see new distilled models for I2V WAN2.2. I do see the wan...
Does lightx2v support these models? - PAI/Wan2.2-Fun-A14B-InP - alibaba-pai/Wan2.2-Fun-A14B-Control - PAI/Wan2.2-VACE-Fun-A14B If not, how can I edit the source code to support them?
Are there any plans to distill a wan2.2 t2v model?
The generated video is completely black
The video produced by running hunyuan_t2v_distill.py is completely black. How can I track down the cause? https://github.com/user-attachments/assets/81ad5910-7f93-487e-89b3-1d3b73dcac23
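A hedged diagnostic sketch, not part of hunyuan_t2v_distill.py: an all-black video usually traces back to NaN/Inf values in the latents or the decoded frames, so checking both stages right before the video is written narrows down where things break. The latents and frames names below are placeholders.

```python
# Hypothetical helper: print value ranges and NaN/Inf flags for a tensor.
import torch

def check_tensor(name: str, t: torch.Tensor) -> None:
    print(
        f"{name}: shape={tuple(t.shape)} dtype={t.dtype} "
        f"min={t.min().item():.4f} max={t.max().item():.4f} "
        f"nan={torch.isnan(t).any().item()} inf={torch.isinf(t).any().item()}"
    )

# Call right after denoising and right after VAE decoding, e.g.:
# check_tensor("latents", latents)
# check_tensor("frames", frames)
```

If the latents are already NaN, the problem is in the denoising loop (often a dtype/precision mismatch); if only the decoded frames are bad, the VAE stage is the place to look.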
The config directory contains files related to wan_moe_i2v_audio, but there is no documentation indicating that wan2.2 s2v can be used. The s2v task in the code seems to point to...
We are using 4 L20 GPUs for inference, but both the VAE encoding and decoding processes are taking a long time. Setting parallel_vae to true did not bring any improvement.
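A hedged profiling sketch (the vae, frames, and latents names are placeholders, not LightX2V's API): timing encode and decode separately shows which stage actually dominates before experimenting further with parallel_vae or tiled decoding.

```python
# Hypothetical timing wrapper for isolating the VAE bottleneck.
import time
import torch

def timed(label, fn, *args, **kwargs):
    torch.cuda.synchronize()
    start = time.perf_counter()
    out = fn(*args, **kwargs)
    torch.cuda.synchronize()
    print(f"{label}: {time.perf_counter() - start:.2f} s")
    return out

# latents = timed("vae.encode", vae.encode, frames)
# video   = timed("vae.decode", vae.decode, latents)
```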
I am trying to load HunyuanVideo-1.5 using LightX2V, and I structured this repo to have everything required: https://huggingface.co/MohamedRashad/HunyuanVideo-1.5-complete/tree/main but the problem is that it still gives me an error ``` Exit...
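One hedged first check, assuming nothing about LightX2V internals: verify that the assembled repo is fully mirrored locally with huggingface_hub before pointing LightX2V at it, since a partially downloaded folder is a common cause of load-time failures. The local directory name below is an example.

```python
# Hypothetical sketch: mirror the full repo locally and use that path.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="MohamedRashad/HunyuanVideo-1.5-complete",
    local_dir="./HunyuanVideo-1.5-complete",
)
print("model files downloaded to:", local_dir)
```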