chaowenguo
```python3
diffusers.WanVACEPipeline.from_pretrained(
    'linoyts/Wan2.2-VACE-Fun-14B-diffusers',
    vae=diffusers.AutoencoderKLWan.from_pretrained('linoyts/Wan2.2-VACE-Fun-14B-diffusers', subfolder='vae', torch_dtype=torch.float32),
    torch_dtype=torch.bfloat16,
    quantization_config=diffusers.PipelineQuantizationConfig(
        quant_backend='bitsandbytes_8bit',
        quant_kwargs={'load_in_8bit': True},
        components_to_quantize=['transformer', 'transformer_2'],
    ),
).save_pretrained('wan')
```
Normally I can save the quantized model this way, but now I want to merge a LoRA and the...
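A minimal sketch of one way the merge is sometimes done, assuming WanVACEPipeline exposes the standard `load_lora_weights`/`fuse_lora` LoRA-loader methods and that the LoRA is fused into the unquantized bf16 weights before any quantization step; `my_lora.safetensors` is a placeholder path, not a file from the original post.

```python3
import diffusers, torch

# Sketch: fuse the LoRA into the full-precision weights first, then save.
# bitsandbytes-quantized weights generally cannot be fused into, so quantization
# (if still wanted) would have to happen after this step.
pipe = diffusers.WanVACEPipeline.from_pretrained(
    'linoyts/Wan2.2-VACE-Fun-14B-diffusers',
    vae=diffusers.AutoencoderKLWan.from_pretrained('linoyts/Wan2.2-VACE-Fun-14B-diffusers', subfolder='vae', torch_dtype=torch.float32),
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights('my_lora.safetensors')  # placeholder LoRA file
pipe.fuse_lora()           # bake the LoRA deltas into the base weights
pipe.unload_lora_weights() # drop the adapter bookkeeping
pipe.save_pretrained('wan')
```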
```python3
!python3 -m pip install -U diffusers peft bitsandbytes

import diffusers, torch, math

qwen = diffusers.QwenImagePipeline.from_pretrained(
    'Qwen/Qwen-Image',
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    quantization_config=diffusers.PipelineQuantizationConfig(
        quant_backend='bitsandbytes_4bit',
        quant_kwargs={'load_in_4bit': True, 'bnb_4bit_quant_type': 'nf4', 'bnb_4bit_compute_dtype': torch.float16},
        components_to_quantize=['transformer', 'text_encoder'],
    ),
)
qwen.scheduler = diffusers.FlowMatchEulerDiscreteScheduler.from_config({
    'base_image_seq_len': 256,
    'base_shift': math.log(3),
    'invert_sigmas': False,
    'max_image_seq_len': 8192,
    ...
```
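For context, a minimal usage sketch of the quantized pipeline loaded above; the prompt and step count are illustrative and not taken from the original post.

```python3
# Illustrative call only: generate a single image with the 4-bit pipeline above.
image = qwen(prompt='a watercolor fox in a misty forest', num_inference_steps=20).images[0]
image.save('qwen_image.png')
```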
On https://huggingface.co/lightx2v/Wan2.2-Distill-Models I do not see any t2v version. Do you have a t2v distill model?
Give me an example of how to load models_t5_umt5-xxl-enc-fp8.safetensors into transformers.UMT5EncoderModel.from_pretrained; it keeps complaining that mat1 and mat2 sizes do not match.
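A minimal sketch of one approach, not a verified fix: from_pretrained expects a full model directory, so this instead builds the model from the Diffusers-format Wan repo's text_encoder config, loads the single safetensors file by hand, and upcasts the fp8 weights before load_state_dict. The local file path and the assumption that its key names match the Transformers layout are both hypothetical; if the keys follow the original Wan layout they would need remapping first.

```python3
import torch, transformers, safetensors.torch

# Sketch under assumptions (see above); the fp8 file path is wherever it was downloaded locally.
config = transformers.UMT5Config.from_pretrained('Wan-AI/Wan2.1-T2V-14B-Diffusers', subfolder='text_encoder')
model = transformers.UMT5EncoderModel(config)

state = safetensors.torch.load_file('models_t5_umt5-xxl-enc-fp8.safetensors')  # placeholder local path
# Upcast fp8 weights so later matmuls run against bf16 activations.
state = {k: v.to(torch.bfloat16) if v.dtype == torch.float8_e4m3fn else v for k, v in state.items()}
missing, unexpected = model.load_state_dict(state, strict=False)
print(len(missing), 'missing keys,', len(unexpected), 'unexpected keys')  # non-empty lists mean the key layout needs remapping
model.to(torch.bfloat16)
```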
Give me an example of how to use diffusers.WanPipeline.from_pretrained with the 'wan2.1_t2v_14b_scaled_fp8_e4m3_lightx2v_4step.safetensors' file from lightx2v/Wan2.1-Distill-Models. This is what I tried:

```python3
model_id = 'Wan-AI/Wan2.1-T2V-14B-Diffusers'
pipe = diffusers.WanPipeline.from_pretrained(
    model_dir,
    vae=diffusers.AutoencoderKLWan.from_pretrained(model_dir, subfolder='vae', torch_dtype=torch.float32).to(onload_device),
    transformer=diffusers.WanTransformer3DModel.from_single_file(
        model_file_download('lightx2v/Wan2.1-Distill-Models', 'wan2.1_t2v_14b_scaled_fp8_e4m3_lightx2v_4step.safetensors'),
        config=builtins.str(pathlib.Path(model_dir).joinpath('transformer')),
        local_files_only=True,
        quantization_config=diffusers.GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
        torch_dtype=torch.bfloat16,
    ),
    text_encoder=transformers.UMT5EncoderModel.from_pretrained(model_dir, subfolder='text_encoder', ...
```
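A minimal sketch of one possible way to wire this up, not a verified recipe: GGUFQuantizationConfig is intended for .gguf checkpoints, so the fp8 .safetensors is loaded here as a plain single file. Whether from_single_file understands the lightx2v scaled-fp8 key layout is an assumption, as are the prompt, step count, and guidance scale.

```python3
import diffusers, torch, huggingface_hub

model_id = 'Wan-AI/Wan2.1-T2V-14B-Diffusers'
ckpt = huggingface_hub.hf_hub_download('lightx2v/Wan2.1-Distill-Models', 'wan2.1_t2v_14b_scaled_fp8_e4m3_lightx2v_4step.safetensors')

# Load the distilled transformer from the single file, taking its config from the Diffusers repo.
transformer = diffusers.WanTransformer3DModel.from_single_file(
    ckpt,
    config=model_id,
    subfolder='transformer',
    torch_dtype=torch.bfloat16,
)
pipe = diffusers.WanPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    vae=diffusers.AutoencoderKLWan.from_pretrained(model_id, subfolder='vae', torch_dtype=torch.float32),
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

# 4-step distill checkpoints are usually run with few steps and low/no guidance (assumed values).
video = pipe(prompt='a cat surfing a wave', num_inference_steps=4, guidance_scale=1.0).frames[0]
diffusers.utils.export_to_video(video, 'out.mp4', fps=16)
```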