Please support Qwen Image Edit 2511
Feature Idea
Model https://huggingface.co/Qwen/Qwen-Image-Edit-2511
Lightning LoRA https://huggingface.co/lightx2v/Qwen-Image-Edit-2511-Lightning
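For reference, the model already runs outside ComfyUI via diffusers. A minimal sketch, assuming 2511 reuses the same `QwenImageEditPlusPipeline` class that 2509 uses (input/output file names are hypothetical):

```python
# Minimal sketch (assumption: 2511 loads with the 2509 QwenImageEditPlusPipeline).
import torch
from diffusers import QwenImageEditPlusPipeline
from PIL import Image

pipe = QwenImageEditPlusPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2511", torch_dtype=torch.bfloat16
).to("cuda")

result = pipe(
    image=[Image.open("input.png").convert("RGB")],  # hypothetical input file
    prompt="replace the background with a sunset beach",
    num_inference_steps=40,
    true_cfg_scale=4.0,
).images[0]
result.save("output.png")
```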
Existing Solutions
No response
Other
No response
GGUF: https://huggingface.co/unsloth/Qwen-Image-Edit-2511-GGUF/tree/main
> GGUF: https://huggingface.co/unsloth/Qwen-Image-Edit-2511-GGUF/tree/main
Unsloth has image generation models too?
doesn't it already just work though?
```
Using pytorch attention in VAE
!!! Exception during processing !!! Error(s) in loading state_dict for AutoencoderKL:
	size mismatch for encoder.conv_in.weight: copying a param with shape torch.Size([96, 3, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3]).
	size mismatch for encoder.conv_in.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
	size mismatch for encoder.conv_out.weight: copying a param with shape torch.Size([32, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 512, 3, 3]).
	size mismatch for decoder.conv_in.weight: copying a param with shape torch.Size([384, 16, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 16, 3, 3]).
	size mismatch for decoder.conv_in.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
	size mismatch for decoder.conv_out.weight: copying a param with shape torch.Size([3, 96, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 128, 3, 3]).
	size mismatch for quant_conv.weight: copying a param with shape torch.Size([32, 32, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 32, 1, 1]).
	size mismatch for post_quant_conv.weight: copying a param with shape torch.Size([16, 16, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([16, 16, 1, 1]).
Traceback (most recent call last):
  File "/home/j/ComfyUI/execution.py", line 516, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
  File "/home/j/ComfyUI/execution.py", line 330, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
  File "/home/j/ComfyUI/execution.py", line 304, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "/home/j/ComfyUI/execution.py", line 292, in process_inputs
    result = f(**inputs)
  File "/home/j/ComfyUI/nodes.py", line 797, in load_vae
    vae = comfy.sd.VAE(sd=sd)
  File "/home/j/ComfyUI/comfy/sd.py", line 665, in __init__
    m, u = self.first_stage_model.load_state_dict(sd, strict=False)
  File "/home/j/anaconda3/envs/comfy/lib/python3.12/site-packages/torch/nn/modules/module.py", line 2629, in load_state_dict
    raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for AutoencoderKL:
	size mismatch for encoder.conv_in.weight: copying a param with shape torch.Size([96, 3, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3]).
	size mismatch for encoder.conv_in.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
	size mismatch for encoder.conv_out.weight: copying a param with shape torch.Size([32, 384, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 512, 3, 3]).
	size mismatch for decoder.conv_in.weight: copying a param with shape torch.Size([384, 16, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 16, 3, 3]).
	size mismatch for decoder.conv_in.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
	size mismatch for decoder.conv_out.weight: copying a param with shape torch.Size([3, 96, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 128, 3, 3]).
	size mismatch for quant_conv.weight: copying a param with shape torch.Size([32, 32, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 32, 1, 1]).
	size mismatch for post_quant_conv.weight: copying a param with shape torch.Size([16, 16, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([16, 16, 1, 1]).
Prompt executed in 0.01 seconds
```
This is with the VAE from the Qwen-Image-Edit-2511 Hugging Face repo.
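Every mismatch above is a 5-D conv weight being loaded into a 4-D slot, i.e. that checkpoint is a 3D-conv (video-style) VAE, not the 2D image VAE that ComfyUI's AutoencoderKL loader builds. A quick way to check which kind a file is before loading it, as a minimal sketch (the path is hypothetical):

```python
# Minimal sketch: peek at conv_in's rank from the safetensors header without
# loading any weights. 4 dims -> 2D image VAE (what this loader expects);
# 5 dims -> 3D-conv VAE (the diffusers-format Qwen VAE), the wrong file here.
from safetensors import safe_open

path = "models/vae/diffusion_pytorch_model.safetensors"  # hypothetical path
with safe_open(path, framework="pt") as f:
    shape = f.get_slice("encoder.conv_in.weight").get_shape()
print("encoder.conv_in.weight:", shape, "->", f"{len(shape)}-D")
```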
@JMLLR1 You need to use the VAE from here: https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/split_files/vae
It’s the same for all Qwen image models.
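If you'd rather script the download than click through, here's a sketch using huggingface_hub (the exact filename is assumed from the repo's split_files layout; verify it in the tree linked above):

```python
from huggingface_hub import hf_hub_download

# Fetch the ComfyUI-packaged Qwen image VAE and drop it in ComfyUI/models/vae.
# Filename assumed from the repo layout -- check the tree linked above.
vae_path = hf_hub_download(
    repo_id="Comfy-Org/Qwen-Image_ComfyUI",
    filename="split_files/vae/qwen_image_vae.safetensors",
)
print(vae_path)
```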
That gives me only noise in the output
I noticed that the GGUF version oversaturates my images. I checked the VAE, but it's the same as 2509 (same hash), so it must be something else. I don't know if it's the quantisation, because I can't run the full model on my setup, but it isn't a problem with 2509.
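If anyone wants to quantify that rather than eyeball it, a small sketch that compares the mean HSV saturation of a 2509 output and a 2511 output (file names are hypothetical):

```python
# Compare mean saturation (HSV S channel, scaled to 0..1) of two outputs to
# put a number on the oversaturation. File names are hypothetical.
import numpy as np
from PIL import Image

def mean_saturation(path: str) -> float:
    s = np.asarray(Image.open(path).convert("HSV"))[..., 1]
    return float(s.mean()) / 255.0

print("2509:", mean_saturation("edit_2509.png"))
print("2511:", mean_saturation("edit_2511.png"))
```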
@JMLLR1
> That gives me only noise in the output
Have you updated Comfy to the latest version? It works for me, except for the saturation problem.
Edit:
Add the "Edit Model Reference Method" node with "index_timestep_zero" to fix quality issues.
https://www.reddit.com/r/StableDiffusion/s/MJMvv5vPib
qwen_image_edit_2511_fp8_e4m3fn_scaled_lightning.safetensors (FP8 quantized: FP8 e4m3fn scaled precision, fused with the 4-step distilled LoRA, optimized for low-memory deployment) does not work in ComfyUI.
qwen_image_edit_2511_fp8_e4m3fn_scaled_lightning.safetensors - that was my problem; downloading another fp8 right now.
Add the "Edit Model Reference Method" node with "index_timestep_zero" to fix quality issues.
This works great, thanks!
(using the unsloth Q4_K_M gguf + bf16 lightning lora)
> qwen_image_edit_2511_fp8_e4m3fn_scaled_lightning.safetensors - that was my problem; downloading another fp8 right now.
Another one is no good either; I already tried it.
Why do some people say that the diffusers version also affects the results of version 2511?
> I noticed that the GGUF version oversaturates my images. I checked the VAE, but it's the same as 2509 (same hash), so it must be something else. I don't know if it's the quantisation, because I can't run the full model on my setup, but it isn't a problem with 2509.
>
> > That gives me only noise in the output
>
> Have you updated Comfy to the latest version? It works for me, except for the saturation problem.
>
> Edit:
> Add the "Edit Model Reference Method" node with "index_timestep_zero" to fix quality issues.
> https://www.reddit.com/r/StableDiffusion/s/MJMvv5vPib
May I ask which GitHub repo contains the "Edit Model Reference Method" node? Thanks.
@markbex ComfyUI has released a new workflow for 2511: https://blog.comfy.org/p/qwen-image-edit-2511-and-qwen-image
The "Edit Model Reference Method" is just a renamed node(as in when you double-click on a node and set a custom name for them). It's actually this node:
Hello, what is the meaning of "fp8mixed" in qwen_image_edit_2511_fp8mixed.safetensors from https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI/tree/main/split_files/diffusion_models ?
Also, what is different about the "_comfyui" file qwen_image_edit_2511_fp8_e4m3fn_scaled_lightning_comfyui.safetensors from https://huggingface.co/lightx2v/Qwen-Image-Edit-2511-Lightning/tree/main ?
They are both 20 GB, and I have no idea which one to use now.
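One way to see what "fp8mixed" means in practice is to count the tensor dtypes inside each file; a mixed checkpoint typically keeps precision-sensitive layers in bf16 and stores the rest in float8. A sketch (the path is hypothetical, and get_dtype() on slices assumes a recent safetensors release):

```python
# Histogram of tensor dtypes in a checkpoint, read from the safetensors header
# without loading any weights. Path is hypothetical; get_dtype() on slices
# assumes a recent safetensors version.
from collections import Counter
from safetensors import safe_open

def dtype_histogram(path: str) -> Counter:
    counts = Counter()
    with safe_open(path, framework="pt") as f:
        for key in f.keys():
            counts[f.get_slice(key).get_dtype()] += 1
    return counts

print(dtype_histogram("qwen_image_edit_2511_fp8mixed.safetensors"))
```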
Just tried qwen_image_edit_2511_fp8mixed, and it gave an image of static garbage. Used the Comfy Org workflow posted above too.