
Give me an example of how to use `diffusers.WanPipeline.from_pretrained` with `lightx2v/Wan2.1-Distill-Models` / `wan2.1_t2v_14b_scaled_fp8_e4m3_lightx2v_4step.safetensors`

Open chaowenguo opened this issue 1 month ago • 0 comments


I tried:

```python
model_dir = 'Wan-AI/Wan2.1-T2V-14B-Diffusers'

pipe = diffusers.WanPipeline.from_pretrained(
    model_dir,
    vae=diffusers.AutoencoderKLWan.from_pretrained(
        model_dir, subfolder='vae', torch_dtype=torch.float32).to(onload_device),
    transformer=diffusers.WanTransformer3DModel.from_single_file(
        model_file_download('lightx2v/Wan2.1-Distill-Models',
                            'wan2.1_t2v_14b_scaled_fp8_e4m3_lightx2v_4step.safetensors'),
        config=builtins.str(pathlib.Path(model_dir).joinpath('transformer')),
        local_files_only=True,
        quantization_config=diffusers.GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
        torch_dtype=torch.bfloat16),
    text_encoder=transformers.UMT5EncoderModel.from_pretrained(
        model_dir, subfolder='text_encoder',
        gguf_file=model_file_download('chaowenguo/umt5-xxl-encoder-gguf',
                                      'umt5-xxl-encoder-Q8_0.gguf'),
        torch_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16)
```

chaowenguo · Nov 05 '25 11:11