LightX2V

Light Video Generation Inference Framework

78 LightX2V issues, sorted by most recently updated

`torch.compile(dit_transformer, fullgraph=True, mode="max-autotune-no-cudagraphs", dynamic=False)`

Hi team, I’m trying to maximize performance for WAN video generation on an **NVIDIA H100**, with the goal of getting as close to **real-time inference** as possible. I’ve been exploring...
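For readers trying the same thing, here is a minimal, self-contained sketch of the compile call from the title. The `Block` module is a hypothetical stand-in for the WAN DiT transformer (LightX2V's actual module is not shown here); the `torch.compile` flags are the ones from the issue title.

```
import torch
import torch.nn as nn

# Hypothetical stand-in for the WAN DiT transformer; in practice this would be
# the model's loaded DiT module.
class Block(nn.Module):
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        return x + self.ff(self.norm(x))

dit_transformer = Block().cuda().to(torch.bfloat16)

dit_transformer = torch.compile(
    dit_transformer,
    fullgraph=True,                     # error out on graph breaks instead of silently splitting
    mode="max-autotune-no-cudagraphs",  # aggressive kernel autotuning, CUDA graphs disabled
    dynamic=False,                      # specialize on fixed shapes for better kernels
)

x = torch.randn(1, 4096, 1024, device="cuda", dtype=torch.bfloat16)
with torch.inference_mode():
    dit_transformer(x)  # first call triggers (slow) autotuning; later calls reuse cached kernels
```

The first forward pass is expensive because autotuning runs then; steady-state latency after warm-up is what matters for the real-time goal.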

I couldn't find a corresponding T2V distill model. I saw there is lightx2v/Wan2.2-Lightning; I set it as a Wan2.2 LoRA, but it fails to load. Running Wan2.2-Lightning directly works, but it is rather slow.
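When a LoRA fails to load, a quick diagnostic is to inspect which module tree the checkpoint's keys expect to attach to, since a key-prefix mismatch between the LoRA file and the target model is the usual cause. A minimal sketch, assuming a locally downloaded safetensors file from the lightx2v/Wan2.2-Lightning repo (the path below is illustrative):

```
from collections import Counter

from safetensors.torch import load_file

# Illustrative local path; point this at the actual downloaded LoRA file.
state = load_file("Wan2.2-Lightning/high_noise_model.safetensors")

# Count keys per top-level prefix to see which module tree the LoRA targets.
prefixes = Counter(key.split(".")[0] for key in state)
for prefix, count in prefixes.most_common():
    print(f"{prefix}: {count} tensors")
```

If these prefixes do not line up with the target transformer's parameter names (Wan2.2 uses separate high-noise and low-noise expert transformers, so the Lightning weights come in two sets), a generic LoRA loader will match nothing or refuse to load.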

```
try:
    from spas_sage_attn.autotune import SparseAttentionMeansim
except ImportError:
    logger.info("SparseAttentionMeansim not found, please install sparge first")
    SparseAttentionMeansim = None
```

When I use Wan2.1 to infer videos, I get the warning: "SparseAttentionMeansim not found,...

### Description
Installed everything and downloaded the latest version.
- Torch: 2.8.0+cu129
- CUDA available: True; CUDA version (torch): 12.9
- Xformers: 0.0.32.post2
- FlashAttention: 2.8.0.post2
- SageAttention: 2.2.0
- nvcc version: Build cuda_12.9.r12.9/compiler.36037853_0

### Environment Information...

bug

"negative_prompt": "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly...

### Description
I use the docker image registry.cn-hangzhou.aliyuncs.com/yongyang/lightx2v:25080601-cu124-SageSm89 to run wan2.2 + sageattention + lightx2v LoRA.

### Steps to Reproduce
1. Run wan2.2 in ComfyUI.

### Expected Result
The model infers normally.

### Actual Result
```
Using 1053...
```

bug

I encountered an issue when using sage_attn2 together with the [Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v model](https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v). The generated video output is completely black in all frames. This issue does not occur when using flash_attn2.

bug
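All-black frames typically mean the attention output degenerated to NaN/inf (or all zeros) somewhere upstream of VAE decoding. Below is a diagnostic sketch of a guard-and-fallback, not LightX2V's actual dispatch code; it assumes the sageattention package's `sageattn` entry point and falls back to PyTorch's built-in scaled dot-product attention:

```
import torch
import torch.nn.functional as F

def attention_with_fallback(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim)
    try:
        from sageattention import sageattn
        out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)
        # Guard: reject degenerate outputs (NaN/inf or all zeros) that would
        # show up as black frames after VAE decoding.
        if torch.isfinite(out).all() and out.abs().max() > 0:
            return out
    except ImportError:
        pass
    # Fallback path: same math via PyTorch SDPA.
    return F.scaled_dot_product_attention(q, k, v)
```

Running both backends on the same inputs and comparing outputs this way is also a quick check for whether the quantized SageAttention kernel, rather than the model weights, is the source of the corruption.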

Found this restriction in the code: https://github.com/ModelTC/LightX2V/blob/4f3534923b64cc4fb3b7bbb8343fa224737ee6b5/lightx2v/models/networks/wan/infer/transformer_infer.py#L39 Why is sage_attn2 incompatible with H100 (compute capability 9.0) + CPU offload? Could you please explain the reasoning... Thank you for your time!
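For readers wondering what such a check amounts to, here is an illustrative guard, not the code behind the link above: it queries the device's compute capability and refuses the sage_attn2 path when CPU offload is combined with SM 9.0 (H100-class) hardware.

```
import torch

def sage_attn2_allowed(cpu_offload: bool) -> bool:
    # Illustrative only: mirrors the *shape* of a capability gate, not the exact
    # condition in lightx2v/models/networks/wan/infer/transformer_infer.py.
    major, minor = torch.cuda.get_device_capability()
    if cpu_offload and (major, minor) == (9, 0):
        return False  # H100/H800 (SM 9.0) with CPU offload: blocked
    return True
```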