
Fix: Normalize batch inputs to 5D tensors for Qwen-Image-Edit

Open · akshan-main opened this pull request 1 month ago · 3 comments

What does this PR do?

Enables robust batch inference for QwenImageEditPlusPipeline by normalizing input tensor shapes, implementing a resize strategy, and handling variable-length prompt embeddings. This adds a batch inference capability that did not exist previously: the pipeline now also accepts lists and tuples as input, and can be used in parallel production workflows.

Description

Addresses issue #12458.

I identified four blockers preventing batch inference in the current pipeline:

  1. 5D Tensor Requirement: The underlying VAE (shared with the Qwen2-VL model) treats batched inputs as video tensors (B, C, F, H, W), but the pipeline was passing 4D tensors (B, C, H, W), causing immediate shape mismatches.

    • Fix: Added a pre-processing step that explicitly inserts the frame dimension for static images when batch_size > 1 (first sketch after this list).
  2. Mixed Resolutions: Stacking a batch into a single tensor requires all images in the batch to have the same spatial size.

    • Fix: Implemented a resize strategy (second sketch after this list).
      • Single/Uniform Images: Preserves the original aspect ratio and resolution (rounded to the nearest multiple of 32).
      • Mixed Batches: Forces images to a user-defined height/width (first priority) or a standard resolution (e.g., 1024x1024) so tensors stack without padding artifacts. (I removed the padding methodology from my previous commit and went with upscaling/resizing instead.)
  3. Tokenizer Batching Issues: The Qwen2VLProcessor produces variable-length embeddings for different prompts, which caused a RuntimeError or IndexError when batch-encoding them directly.

    • Fix: Refactored encode_prompt to process prompts individually in a loop, then pad the resulting embeddings to the maximum sequence length in the batch before concatenating (third sketch after this list).
  4. Fragile Input Types: The pipeline would crash if users accidentally passed a tuple of images.

    • Fix: Added _sanitize_images, which recursively unwraps nested inputs into a clean flat list (fourth sketch after this list).
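
The sketches below illustrate each fix. First, the frame-dimension normalization from item 1; this is a minimal sketch, and the helper name `ensure_5d` is illustrative rather than the PR's literal code:

```python
import torch

def ensure_5d(latents: torch.Tensor) -> torch.Tensor:
    """Normalize image tensors to the video layout (B, C, F, H, W) the VAE expects."""
    if latents.ndim == 4:
        # Static images arrive as (B, C, H, W); insert a singleton frame axis.
        latents = latents.unsqueeze(2)  # -> (B, C, 1, H, W)
    return latents
```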
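
Second, the resize strategy from item 2. A minimal sketch assuming PIL inputs; `resize_batch` and its default resolution are illustrative:

```python
from PIL import Image

def resize_batch(images, height=None, width=None, default=1024):
    """Uniform batches keep their size (rounded to a multiple of 32);
    mixed batches fall back to a common resolution so tensors stack."""
    if height is None or width is None:  # no user-defined size: infer one
        if len({img.size for img in images}) == 1:
            w, h = images[0].size
            width = max(32, round(w / 32) * 32)
            height = max(32, round(h / 32) * 32)
        else:
            width = height = default  # mixed batch: standard square resolution
    return [img.resize((width, height), Image.LANCZOS) for img in images]
```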
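
Third, the embedding padding from item 3. A sketch of padding per-prompt embeddings to the batch maximum before stacking, assuming each prompt encodes to a (seq_len, dim) tensor:

```python
import torch
import torch.nn.functional as F

def pad_and_stack(embeds_list):
    """Pad variable-length prompt embeddings to the batch max, then stack."""
    max_len = max(e.shape[0] for e in embeds_list)
    # Pad only the sequence dimension (rows), leaving the feature dim intact.
    padded = [F.pad(e, (0, 0, 0, max_len - e.shape[0])) for e in embeds_list]
    return torch.stack(padded, dim=0)  # (B, max_len, dim)
```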
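
Finally, the input sanitization from item 4; this is roughly how `_sanitize_images` behaves, not its exact implementation:

```python
from PIL import Image

def _sanitize_images(images):
    """Recursively unwrap nested lists/tuples into a flat list of PIL images."""
    if isinstance(images, Image.Image):
        return [images]
    flat = []
    for item in images:  # handles lists, tuples, and nested mixtures
        flat.extend(_sanitize_images(item))
    return flat
```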

Note on Batching Logic

To resolve the ambiguity between "Multi-Image Conditioning" and "Batch Inference", I implemented the following routing logic in encode_prompt:

  1. Single String Prompt (prompt="string"):

    • Behavior: Joint Condition. The pipeline treats all provided images as a single context for one generation task.
    • Use Case: Style transfer or merging elements from multiple reference images.
  2. List of Prompts (prompt=["s1", "s2"]):

    • Behavior: Parallel Batch. The pipeline maps images to prompts 1-to-1 (see the usage sketch after this list).
    • Use Case: Processing a dataset (e.g., editing 50 different images with 50 different instructions at once).
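
A hypothetical usage sketch of the two modes; the checkpoint name and image variables are placeholders, and the call shape follows the existing pipeline API rather than being the PR's literal test code:

```python
from diffusers import QwenImageEditPlusPipeline

pipe = QwenImageEditPlusPipeline.from_pretrained("Qwen/Qwen-Image-Edit-2509")
# ref_a, ref_b, img1, img2: PIL images loaded elsewhere

# Mode 1 - joint conditioning: one prompt, all images form a single context
merged = pipe(image=[ref_a, ref_b], prompt="blend the style of both references")

# Mode 2 - parallel batch: prompts map to images 1-to-1
edited = pipe(image=[img1, img2], prompt=["make it night", "make it snowy"])
```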

Fixes #12458

Before submitting

  • [x] This PR fixes a typo or improves the docs.
  • [x] Did you read the contributor guideline?
  • [x] Did you read our philosophy doc?
  • [x] Was this discussed/approved via a GitHub issue? (Issue #12458)
  • [ ] Did you make sure to update the documentation with your changes?
  • [x] Did you write any new necessary tests? (Verified via reproduction script)

Who can review?

@yiyixuxu @sayakpaul @DN6

akshan-main · Nov 22, 2025

Hey @sayakpaul @yiyixuxu, let me know if I need to make any changes. The functionality works as intended!

akshan-main · Nov 28, 2025

The mask output from encode_prompt is not used in the attention calculation: https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_qwenimage.py#L338 (it is always None there).

We will not move forward with this PR. We have a PR addressing a similar issue, https://github.com/huggingface/diffusers/pull/12702, and will focus on that instead :)

yiyixuxu · Dec 8, 2025

Thanks for the clarification, that helps.

Edit: Regarding #12702: that PR fixes variable prompt-length handling inside the transformer, but it does not by itself turn QwenImageEditPlusPipeline into the dataset-style batch API requested in #12458, where image[i] is edited with prompt[i] in parallel. This PR adds that behavior at the pipeline level by introducing 1:1 image-prompt routing, tuple/list input sanitization, and a concrete strategy for mixed-resolution images so batches can actually be stacked without errors. In that sense, #12702 solves the internal masking issue, while this PR is still required to support the practical batched-inference workflow that originally motivated #12458.

akshan-main · Dec 8, 2025