LightX2V
Light Video Generation Inference Framework
Starting from the official wan-i2v code, what modifications are needed?
Hi! I am looking into the Self-Forcing-Plus repository, which implements self-forcing with DMD: https://github.com/GoatWu/Self-Forcing-Plus/tree/wan22 @GoatWu. Two training pipelines are supported in SelfForcingModel, and I wonder how SelfForcingTrainingPipeline and BidirectionalTrainingPipeline work...
Can't find the torch module. Starting LightX2V I2V inference... Warn: CUDA_VISIBLE_DEVICES is not set, using default value: ; change it in the shell script or set the env variable. Environment variables set! PYTHONPATH: D:\Git\LightX2V; CUDA_VISIBLE_DEVICES: Model path: D:\Git\LightX2V\models\Wan2.1-I2V-14B-480P-Lightx2v Traceback...
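The warning above usually means the launch script was started without the expected environment. A minimal sketch (the GPU index and install path are assumptions, not taken from the report) of setting the variables before running inference:

```shell
# Assumed values: adjust the GPU index and the checkout path to your setup.
export CUDA_VISIBLE_DEVICES=0        # GPU(s) visible to the process
export PYTHONPATH=/d/Git/LightX2V    # lets `import lightx2v` resolve
# If `torch` itself is not found, it must first be installed into the
# active Python environment (matching your CUDA version).
```

Note that a "module not found" error for torch is an environment problem, not a LightX2V one: the Python interpreter being launched simply does not have torch installed.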
Is it possible to use something like this? It returns 12 channels instead of 3, so the tensor copy currently complains. https://huggingface.co/spacepxl/Wan2.1-VAE-upscale2x
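If the 12 channels are a 2×2 pixel-shuffle packing of RGB (12 = 3 · 2 · 2), which the "upscale2x" naming suggests but which is not confirmed against that checkpoint, a depth-to-space step would map them back to 3 channels at twice the spatial resolution. A sketch:

```python
import torch
import torch.nn.functional as F

# Assumption: the decoder's 12-channel output is a 2x2 pixel-shuffle packing
# of RGB. pixel_shuffle rearranges (B, C*r^2, H, W) into (B, C, H*r, W*r),
# here doubling height and width while restoring 3 channels.
decoded = torch.randn(1, 12, 240, 426)           # stand-in for the VAE output
frames = F.pixel_shuffle(decoded, upscale_factor=2)
print(frames.shape)                              # torch.Size([1, 3, 480, 852])
```

If the checkpoint instead produces 12 genuine feature channels, a learned projection back to RGB would be needed and this rearrangement would not apply.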
### Description
When executing `python lightx2v_kernel/test/nvfp4_nvfp4/test_bench1.py`, I get a **torch.AcceleratorError: CUDA error: an illegal memory access was encountered** error.
### Steps to Reproduce
1. docker image: lightx2v/lightx2v:25111101-cu128
2. lightx2v commit: 63f0486f11913ce1d3cf0d79ed1b70c3ce2d1545
3. ...
https://huggingface.co/lightx2v/Wan2.2-Distill-Models — I do not see any t2v version. Do you have a t2v distill model?
In the docker image lightx2v/lightx2v:25111101-cu128, the vLLM version 0.11.1rc2 is not an official release. You should note this, since in some environments we cannot use Docker. When I use the release version, I meet the...
Thank you very much for your work. The acceleration LoRA you trained works extremely well. I learned from Hugging Face that you trained it using the GoatWu/Self-Forcing-Plus repository. But when I...