
πŸ€— Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.

Results: 1293 diffusers issues, sorted by recently updated.

### Describe the bug

Cannot load LoRAs into quanto-quantized Flux.

```py
import torch
from diffusers import FluxTransformer2DModel, FluxPipeline
from huggingface_hub import hf_hub_download
from optimum.quanto import qfloat8, quantize, freeze
from transformers...
```

bug
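
For context, the usual quanto + Flux recipe looks roughly like the minimal sketch below, assuming the `black-forest-labs/FLUX.1-dev` checkpoint; the LoRA repo id is hypothetical, and the final `load_lora_weights` call is where this issue reports the failure:

```py
import torch
from diffusers import FluxTransformer2DModel, FluxPipeline
from optimum.quanto import qfloat8, quantize, freeze

# Quantize only the transformer to qfloat8, then freeze the quantized weights.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
)
quantize(transformer, weights=qfloat8)
freeze(transformer)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.bfloat16
)

# This is the step the issue reports as failing on quanto-quantized weights.
pipe.load_lora_weights("user/some-flux-lora")  # hypothetical LoRA repo id
```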

I see that the current version (0.36.0-dev) only adds the text-to-image pipeline for Flux2. We are looking forward to an image-to-image pipeline as well!

Just an idea. It's not a problem or anything... I've been using a custom offload for my potato GPU. Maybe there is another way to do it or so... In...
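
For reference, diffusers already ships two built-in offload hooks that cover most low-VRAM setups; a minimal sketch, with the checkpoint id chosen only for illustration:

```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# Moves whole sub-models (UNet, text encoders, VAE) to the GPU one at a time:
# a good speed/VRAM trade-off for most cards.
pipe.enable_model_cpu_offload()

# For the tightest VRAM budgets, offload at the leaf-module level instead
# (much slower, but peak usage drops to a few GB):
# pipe.enable_sequential_cpu_offload()

image = pipe("a lighthouse at dusk").images[0]
```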

Currently, ZImagePipeline does not support applying LoRA models. This limits the flexibility of the pipeline when trying to use specific styles or customized models. I would like to request native...
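
On pipelines that already inherit the LoRA loader mixins, the requested behavior looks like the sketch below (repo ids are placeholders); the request is presumably for ZImagePipeline to expose the same interface:

```py
import torch
from diffusers import FluxPipeline  # any LoRA-capable pipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Standard diffusers LoRA interface: load the adapter, name it, weight it.
pipe.load_lora_weights("user/style-lora", adapter_name="style")  # hypothetical id
pipe.set_adapters(["style"], adapter_weights=[0.8])
```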

### Describe the bug

Hi, I am trying to optimize Wan 2.2 T2V / I2V inference speed on a single RTX 4090, using:
1. Wan 2.2 (Diffusers)
2. LightX2V LoRA...

bug
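
The usual single-4090 levers for this setup are a step-distilled LoRA plus `torch.compile`; a rough sketch, where both the checkpoint id and the LoRA repo id are unverified assumptions:

```py
import torch
from diffusers import WanPipeline

# Assumed Diffusers-format checkpoint id for Wan 2.2 T2V.
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-T2V-A14B-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")

# A step-distillation LoRA such as LightX2V lets you cut num_inference_steps;
# the repo id below is a placeholder.
pipe.load_lora_weights("user/lightx2v-lora")

# Compiling the transformer is the main single-GPU throughput lever.
pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune")

video = pipe("a corgi surfing", num_inference_steps=4).frames[0]
```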

### Describe the bug

If you try to use ZImagePipeline with batch sizes above 1, it fails with an assertion error.

### Reproduction

```
import torch
from diffusers import ZImagePipeline...
```

bug
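
A generic batched call that should exercise the same path, assuming ZImagePipeline follows the standard diffusers text-to-image signature (the checkpoint id is an assumption):

```py
import torch
from diffusers import ZImagePipeline

pipe = ZImagePipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16  # assumed repo id
).to("cuda")

# A prompt list (effective batch size 2) is what reportedly trips the assertion.
images = pipe(prompt=["a cat", "a dog"]).images
```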

```python3
!python3 -m pip install -U diffusers peft bitsandbytes

import diffusers, torch, math

qwen = diffusers.QwenImagePipeline.from_pretrained(
    'Qwen/Qwen-Image',
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    quantization_config=diffusers.PipelineQuantizationConfig(
        quant_backend='bitsandbytes_4bit',
        quant_kwargs={
            'load_in_4bit': True,
            'bnb_4bit_quant_type': 'nf4',
            'bnb_4bit_compute_dtype': torch.float16,
        },
        components_to_quantize=['transformer', 'text_encoder'],
    ),
)
qwen.scheduler = diffusers.FlowMatchEulerDiscreteScheduler.from_config({
    'base_image_seq_len': 256,
    'base_shift': math.log(3),
    'invert_sigmas': False,
    'max_image_seq_len': 8192,
    ...
```

# What does this PR do?

Adds support for FlashPack. FlashPack could be used as a weight format only (see: `load_flashpack_checkpoint`); keeping only the weight-format code would be a cleaner...

## πŸ€–UAA: Ulysses Anything Attention We have implemented the **[πŸ“šUAA: Ulysses Anything Attention](https://github.com/vipshop/cache-dit/blob/main/docs/User_Guide.md#uaa-ulysses-anything-attention)**: An Ulysses Attention that supports **arbitrary sequence length** with βœ…**zero padding** and **nearly βœ…zero theoretical communication overhead**....