LightX2V
Light Video Generation Inference Framework
### Description
Hit an error while testing AdaCache: `'WanTransformerInferAdaCaching' object has no attribute 'infer_modulation'`.

### Steps to Reproduce
Run a script modified from run_wan_i2v_tea.sh: python -m...
Running wan_moe_i2v_distill.json with the distill models and its original settings produces extremely noisy, unwatchable video.

```json
{
  "infer_steps": 12,
  "target_video_length": 81,
  "text_len": 512,
  "target_height": 720,
  "target_width": 1280,
  "self_attn_1_type": "flash_attn3",
  "cross_attn_1_type": "flash_attn3",
  "cross_attn_2_type": "flash_attn3",
  "sample_guide_scale": [5.0, 9.0],
  "sample_shift": 0.5,
  "enable_cfg": true,
  "cpu_offload": true,
  "offload_granularity": ...
```
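Distilled checkpoints are usually run with classifier-free guidance disabled and a sampler shift tuned for few-step inference, so values like `"enable_cfg": true` and `"sample_shift": 0.5` are worth double-checking. Below is a minimal sketch of how one might override them; the specific values are assumptions, not verified LightX2V settings.

```python
import json

# Load the shipped config and override the sampling fields that most often
# cause noisy output with distilled models. Both override values below are
# assumptions to illustrate the idea, not official recommendations.
with open("configs/wan22/wan_moe_i2v_distill.json") as f:
    cfg = json.load(f)

cfg["enable_cfg"] = False   # assumed: distill models bake guidance in
cfg["sample_shift"] = 5.0   # assumed: 0.5 is unusually low for Wan samplers

with open("wan_moe_i2v_distill_override.json", "w") as f:
    json.dump(cfg, f, indent=2)
```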
Could someone give me an example of how to load models_t5_umt5-xxl-enc-fp8.safetensors into transformers.UMT5EncoderModel.from_pretrained? It keeps complaining that mat1 and mat2 sizes do not match.
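from_pretrained does not dequantize fp8 weights on its own, so one approach is to build the encoder from a base config and load a manually dequantized state dict. A minimal sketch, assuming the checkpoint stores float8_e4m3fn tensors plus per-tensor scales; the base repo id, the `_scale` key suffix, and the key layout matching HF's UMT5 naming are all assumptions, so inspect the file's keys first:

```python
import torch
from safetensors.torch import load_file
from transformers import AutoConfig, UMT5EncoderModel

# Build the encoder from a base config (no base weights downloaded).
# Using "google/umt5-xxl" as the architecture source is an assumption.
config = AutoConfig.from_pretrained("google/umt5-xxl")
model = UMT5EncoderModel(config).to(torch.bfloat16)

state = load_file("models_t5_umt5-xxl-enc-fp8.safetensors")
dequantized = {}
for name, tensor in state.items():
    if tensor.dtype == torch.float8_e4m3fn:
        scale = state.get(name + "_scale")  # assumed scale-key convention
        tensor = tensor.to(torch.bfloat16)
        if scale is not None:
            tensor = tensor * scale.to(torch.bfloat16)
        dequantized[name] = tensor
    elif not name.endswith("_scale"):
        dequantized[name] = tensor

# strict=False: the checkpoint's key names may not match HF's UMT5 layout
# exactly (a likely source of "mat1 and mat2" shape errors); check what
# load_state_dict reports as missing/unexpected before running the model.
result = model.load_state_dict(dequantized, strict=False)
print(result.missing_keys[:5], result.unexpected_keys[:5])
```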
Could someone give me an example of how to use diffusers.WanPipeline.from_pretrained with 'lightx2v/Wan2.1-Distill-Models' / 'wan2.1_t2v_14b_scaled_fp8_e4m3_lightx2v_4step.safetensors'? With model_id = 'Wan-AI/Wan2.1-T2V-14B-Diffusers' I tried:

```python
pipe = diffusers.WanPipeline.from_pretrained(
    model_dir,
    vae=diffusers.AutoencoderKLWan.from_pretrained(
        model_dir, subfolder='vae', torch_dtype=torch.float32
    ).to(onload_device),
    transformer=diffusers.WanTransformer3DModel.from_single_file(
        model_file_download('lightx2v/Wan2.1-Distill-Models',
                            'wan2.1_t2v_14b_scaled_fp8_e4m3_lightx2v_4step.safetensors'),
        config=builtins.str(pathlib.Path(model_dir).joinpath('transformer')),
        local_files_only=True,
        quantization_config=diffusers.GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
        torch_dtype=torch.bfloat16,
    ),
    text_encoder=transformers.UMT5EncoderModel.from_pretrained(
        model_dir, subfolder='text_encoder', ...
```
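One likely problem in the snippet above is GGUFQuantizationConfig, which is meant for .gguf checkpoints, not .safetensors files. A minimal sketch without it, untested against this exact checkpoint (the fp8 weights may still need manual dequantization):

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline, WanTransformer3DModel
from huggingface_hub import hf_hub_download

model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"

# Download the single-file distill transformer checkpoint.
ckpt = hf_hub_download(
    "lightx2v/Wan2.1-Distill-Models",
    "wan2.1_t2v_14b_scaled_fp8_e4m3_lightx2v_4step.safetensors",
)

# Point `config` at the base Diffusers repo so the transformer gets the
# right architecture; no GGUF quantization config for a .safetensors file.
transformer = WanTransformer3DModel.from_single_file(
    ckpt,
    config=model_id,
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)

vae = AutoencoderKLWan.from_pretrained(
    model_id, subfolder="vae", torch_dtype=torch.float32
)

pipe = WanPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    vae=vae,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
```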
I get this error when I open ComfyUI:

```
[Prompt Server] web root: /workspace/ComfyUI/venv/lib/python3.12/site-packages/comfyui_frontend_package/static
Traceback (most recent call last):
  File "/workspace/ComfyUI/nodes.py", line 2131, in load_custom_node
    module_spec.loader.exec_module(module)
  File "", line...
```
My config is below, and generation errors out. I am using the non-FP8 distilled model. After removing `"feature_caching": "Ada"` from the config, video generation works normally.
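A minimal sketch of the workaround described above, stripping the Ada feature-caching key before running; the config path is hypothetical:

```python
import json

path = "configs/wan_i2v_distill.json"  # hypothetical path to your config
with open(path) as f:
    cfg = json.load(f)

# Drop the Ada feature-caching option; per the report, generation is
# normal once it is removed.
cfg.pop("feature_caching", None)

with open(path, "w") as f:
    json.dump(cfg, f, indent=2, ensure_ascii=False)
```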
Hi, I am having difficulty understanding how the config files in https://github.com/ModelTC/LightX2V/tree/837feba7d0fe32e4f8ca05024ad125aed4a31ca9/configs/wan22 map to the checkpoints I find in https://huggingface.co/lightx2v/Wan2.2-Distill-Models/tree/main. 1/ Should I use [wan_moe_i2v_distill_quant.json](https://github.com/ModelTC/LightX2V/blob/837feba7d0fe32e4f8ca05024ad125aed4a31ca9/configs/wan22/wan_moe_i2v_distill_quant.json)...
### Description
I hit the `torch.AcceleratorError: CUDA error: no kernel image is available` error when running the provided script `bash wan22/run_wan22_moe_i2v_distill.sh`.

### Steps to Reproduce
1. Pull the official docker...
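"No kernel image is available" usually means the torch build (or a prebuilt extension wheel inside the image) was not compiled for this GPU's architecture. A quick diagnostic sketch:

```python
import torch

# Compare the GPU's compute capability against the architectures this
# torch build was compiled for; if the GPU's sm_XX is missing from the
# list, the binaries need rebuilding or reinstalling for that arch.
major, minor = torch.cuda.get_device_capability(0)
print(f"GPU compute capability: sm_{major}{minor}")
print("torch compiled for:", torch.cuda.get_arch_list())
```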
packages=find_packages(include=["lightx2v", "lightx2v.*"]) does not pick up all code modules: the input_encoders and video_encoders packages contain no __init__.py, so they are left out of the distribution.
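find_packages() skips directories without an __init__.py, so the fix is either to add empty __init__.py files to those two directories or to collect namespace packages as well. A minimal setup.py sketch of the latter; everything besides the packages= line is illustrative:

```python
from setuptools import find_namespace_packages, setup

setup(
    name="lightx2v",
    # find_namespace_packages also picks up PEP 420 packages that lack
    # an __init__.py, such as input_encoders and video_encoders.
    packages=find_namespace_packages(include=["lightx2v", "lightx2v.*"]),
)
```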
This is really weird. I'm using the Wan2.2-I2V-A14B-Q8_0 GGUF models with native I2V workflows. When the Wan2.2-Lightning LoRAs (Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1) are used, everything works fine; whereas when the Wan2.2-Distill-Loras (wan2.2_i2v_A14b_lora_rank64_lightx2v_4step_1022) are used, ComfyUI gets...