sd-webui-deforum
[Feature Request]: Allow for selection of video used for color coherence.
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What would your feature do ?
Currently, the `Video Input` option of `Color Coherence` uses the video defined in Init -> `Video Init`, but it would be beneficial to have an additional field called `Alt Video Init` that lets the user choose which video to use. This would give the user much more control over color coherence.
Proposed workflow
Adding a video file name field under `Color Coherence` when `Alt Video Input` is selected.
Additional information
This could be implemented by extracting the frames of a video into a subfolder called `colorframes` when `Alt Video Init` is activated; `render.py` would then gain code similar to what exists now:
```python
if anim_args.color_coherence == 'Alt Video Input' and hybrid_available:
    if int(frame_idx) % int(anim_args.color_coherence_video_every_N_frames) == 0:
        prev_vid_img = Image.open(os.path.join(args.outdir, 'colorframes', get_frame_name(anim_args.video_init_path) + f"{frame_idx:09}.jpg"))
        prev_vid_img = prev_vid_img.resize((args.W, args.H), PIL.Image.LANCZOS)
        color_match_sample = np.asarray(prev_vid_img)
        color_match_sample = cv2.cvtColor(color_match_sample, cv2.COLOR_RGB2BGR)
```
The number of frames generated in `colorframes` would equal the number of frames in `inputframes`, but the user could supply a video of any length, and the extraction code could easily calculate the correct number of frames via interpolation, expansion, and decimation.
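Since the alt video's frame count will not generally match the animation's, the extraction step needs to map each animation frame index onto a source-video frame. A minimal sketch of that mapping (the function name `resample_frame_indices` is hypothetical, not existing Deforum code):

```python
def resample_frame_indices(source_count: int, target_count: int) -> list[int]:
    """Pick which source-video frames to extract so that `target_count`
    colorframes cover the source evenly.

    Handles both decimation (source longer than the animation) and
    expansion (source shorter, so frames are repeated)."""
    if source_count <= 0 or target_count <= 0:
        return []
    if target_count == 1:
        return [0]
    # Spread target positions linearly across the source's frame range.
    step = (source_count - 1) / (target_count - 1)
    return [round(i * step) for i in range(target_count)]
```

For example, a 10-frame source decimated to 5 colorframes yields indices `[0, 2, 4, 7, 9]`, while a 3-frame source expanded to 6 repeats frames as `[0, 0, 1, 1, 2, 2]`. The extraction loop would then save source frame `k` as colorframe `i` for each `(i, k)` pair.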
Are you going to help adding it?
I am happy to contribute to the degree I am capable of. I have looked at the code, and I have more questions than understanding about it, but this task seems simple enough for me to attempt if I get a few pointers.
My discord ID is 1064991849149898822
Is anyone still working on this? I'd really love to use this feature! Currently I have to do color transitions for coherence init images manually (which means stopping the clip and resuming with a different init a couple of times to get smooth changes, and this barely works with cadence > 1).