
Wan 2.2 I2V Looping same image Start-End step - Color contrast increase over time

Open DeepSeaCatz opened this issue 2 months ago • 6 comments

I've been trying to solve this for the past 2 weeks. When I try to loop with the WanVideoSampler, the generation seems to increase color contrast over time. I've tried changing:

  1. Every scheduler (Euler, Unipc, lcm, dpm++)
  2. Color match nodes after vae
  3. Different vae models (2.1, bf16, fp32)
  4. Different sigma steps, shifts, cfg, steps
  5. Different High/Low models, including the fp8 you've created.
  6. Lightning, lightx2v 4 steps loras
  7. No loras
  8. etc.
Image
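For reference, the color-match step (item 2 above) amounts to matching each decoded frame's per-channel statistics to the reference image. A minimal NumPy sketch of that idea (my own illustration, not the actual node's code; frames assumed to be float RGB arrays in [0, 1]):

```python
import numpy as np

def match_color(frame, reference, eps=1e-6):
    """Shift/scale each RGB channel of `frame` so its mean and std
    match those of `reference` (simple mean/std color transfer)."""
    out = frame.astype(np.float64).copy()
    ref = reference.astype(np.float64)
    for c in range(3):
        f_mean, f_std = out[..., c].mean(), out[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (out[..., c] - f_mean) * (r_std / (f_std + eps)) + r_mean
    return np.clip(out, 0.0, 1.0)

# Example: a frame whose contrast and brightness drifted is pulled
# back toward the reference image's statistics.
rng = np.random.default_rng(0)
reference = rng.uniform(0.2, 0.8, size=(8, 8, 3))
drifted = np.clip((reference - 0.5) * 1.5 + 0.55, 0.0, 1.0)  # simulated drift
corrected = match_color(drifted, reference)
```

A global correction like this can mask slow drift in the output, but it doesn't address whatever is causing the shift inside the sampler itself.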

Creating non-loop video doesn't have this issue though, so it's precisely something with looping.

I found that my best option is to stay away from the WanVideoWrapper nodes and use the regular KSampler or SamplerCustomAdvanced, feeding the image to the start_image/end_image inputs of the native WanFunInpaintToVideo node. It does take longer to render, though, and the results are less interesting than WanVideoWrapper's. Alternatively, I was able to use Vid2Vid with Wan VACE, using the first/last frame of a non-looping video to make it loop, but it didn't sync up every time and took 10x longer to render.

The raw 16fps output has noticeable color contrast shifting: https://github.com/user-attachments/assets/de386398-f61b-4dfa-94ad-fddaa9bb34fc

When interpolating, though, there is a lot less color contrast shifting, but it is still noticeable when it loops back. It also seems to start brighter than the input image: https://github.com/user-attachments/assets/be72cc76-4fe5-447a-9117-79194be45493

To make matters worse, I loaded an older Wan 2.1 workflow that I used to make loops with and with the exact same settings I found that it created gray flicker and ended the video almost fully gray.

Image

I am not sure what is going on. The only difference from back then is that I was using an older ComfyUI version and perhaps older node versions; everything is up to date now. I am starting to lose it; I'm not sure what the issue is here. Every single setting I change gives unexpected results. There aren't many people posting about this issue online, but it is real, and I haven't found a solution for WanVideoWrapper yet.

Is it a Python dependency that might be too high/low? My PyTorch is 2.7.0+cu128 on an RTX 3090.

DeepSeaCatz avatar Oct 24 '25 06:10 DeepSeaCatz

So after a month of testing, I finally figured out that the main issue is the default High/Low noise models, which are not compatible with looping. They will add color burn, increasing color contrast, color degradation, and noise over time.

The only models I could find and test that worked well are the DaSiWa model and the Smooth Mix model.

They are both merged models incorporating various LoRAs, including Lightx2v 4-steps. So you need to use 4 steps in your sampler and CFG 1.0. At CFG 1.0 the negative prompt is effectively ignored, so I'm currently testing a WanVideoNAG node between the negative prompt and the sampler to see if it can pick it up.

The DaSiWa model has many NSFW models merged in but requires very little prompting. The Smooth Mix model requires lots of prompting.

I am not sure why these models work compared to using the same base model + LoRAs loaded separately, but at least they work.

DeepSeaCatz avatar Nov 12 '25 02:11 DeepSeaCatz

Out of curiosity, what are you trying to achieve? I ask because I also use Wan 2.2 I2V, but with the core ComfyUI workflow. I sometimes use first + last frame to video, and I don't remember the last frame ever being different in any way from what I gave it as input. It's true that I don't try to make looping videos, and I always use 4-step LoRAs (and obviously 1.0 CFG because of that).

As a side note, one tip (if you are not already doing so): when you use Wan 2.2, try to customize your sigmas so the high noise model covers from 1.0 down to at least 0.9 or 0.85, and the low noise model covers from 0.9-0.85 down to 0. I mean: 1.0 - 0.9 sigmas => high noise model; 0.9 - 0.0 sigmas => low noise model.
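That split can be sketched as follows (a standalone illustration with a simple linear schedule, not ComfyUI node code; the boundary value is the 0.9/0.85 mentioned above):

```python
# Split a descending sigma schedule at a boundary so the high-noise model
# handles sigmas from 1.0 down to the boundary and the low-noise model
# handles the rest down to 0.

def split_sigmas(sigmas, boundary=0.9):
    """Split a descending list of sigmas at `boundary`.

    Returns (high_sigmas, low_sigmas). The boundary sigma is shared so the
    low-noise model resumes exactly where the high-noise model stopped.
    """
    # index of the first sigma at or below the boundary
    idx = next(i for i, s in enumerate(sigmas) if s <= boundary)
    high = sigmas[: idx + 1]  # e.g. 1.0 ... ~0.9
    low = sigmas[idx:]        # e.g. ~0.9 ... 0.0
    return high, low

# Example with a simple linear 8-step schedule from 1.0 to 0.0:
steps = 8
schedule = [1.0 - i / steps for i in range(steps + 1)]
high, low = split_sigmas(schedule, boundary=0.9)
```

The important part is that the two segments share the boundary sigma, otherwise the handoff between the two models introduces a discontinuity.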

jovan2009 avatar Nov 12 '25 17:11 jovan2009

It's true that I don't try to make looping videos

The regular video generation works perfectly. The issue is creating a seamless loop where you use the same image reference as both the First and Last input. Wan 2.1 had no issue with this, but Wan 2.2 introduced color degradation over time with the regular models.

Since the last frames of the loop end up different from the first, the illusion of an infinite video doesn't work. Even at 3 seconds, this issue occurs. It would be interesting to know what alternative checkpoint models could be used, OR what additional models could be used alongside the default High/Low models to fix this. And yes, I've also tried customizing the sigmas, with no success. Very strange.

DeepSeaCatz avatar Nov 12 '25 18:11 DeepSeaCatz

My off-the-top-of-my-head hypothesis is that the model doesn't "expect" the same image as both the first and the last frame. One workaround I can think of is to use another, slightly different image as the last image, then make a second video with the images in reversed order (the first image of the first video becomes the last of the second video). Then you stitch (concatenate) the two videos together.
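The stitching idea can be sketched like this (frames shown as labels purely for illustration; the helper name is made up):

```python
# Sketch of the stitching workaround: clip_a runs A -> B and clip_b runs
# B -> A, so concatenating them yields a video that ends where it began.
def stitch_loop(clip_a, clip_b):
    """Concatenate two clips generated with swapped first/last frames.

    clip_b's first frame duplicates clip_a's last frame, so drop it to
    avoid a one-frame stutter at the join.
    """
    assert clip_a[-1] == clip_b[0], "clips must share the transition frame"
    return clip_a + clip_b[1:]

# Example with frame labels:
forward = ["A", "m1", "m2", "B"]   # first video: image A -> image B
backward = ["B", "m3", "m4", "A"]  # second video: same images, reversed
loop = stitch_loop(forward, backward)
# loop ends on "A", matching its first frame, so playback can repeat seamlessly.
```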

jovan2009 avatar Nov 12 '25 19:11 jovan2009

Hmm, interesting, I'll try that out. Maybe I could inpaint an area where I want the motion to follow through and then loop back to the first image. I wonder if the issue is simply using the same Load Image node to feed both the first and last frame. Perhaps duplicating the image reference with a new name without editing the image could work. I'll do some tests, thanks.

DeepSeaCatz avatar Nov 12 '25 19:11 DeepSeaCatz

Perhaps duplicating the image reference with a new name without editing the image could work. I'll do some tests, thanks.

Changing the name while keeping the exact same image will, I predict, have no effect.

Edit: also, I wouldn't bother with inpainting. I would pick a good frame from the middle of your existing video that doesn't loop well and use that.

jovan2009 avatar Nov 12 '25 20:11 jovan2009