Text-To-Video-Finetuning

Add InfiNet module for DiffusionOverDiffusion training to allow for extremely (minutes!) long video creation

Open kabachuha opened this issue 2 years ago • 12 comments

Hi, Exponential-ML!

As you probably know, a bit more than a week ago Microsoft published a paper describing their novel DiffusionOverDiffusion technique (https://arxiv.org/abs/2303.12346). It works by first outlining coarse keyframes and then picking pairs of neighbouring keyframes as start and end points and filling in the in-betweens (with different, more local prompts!)
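
For intuition, here is a tiny Python sketch of that coarse-to-fine recursion. The `sample_keyframes` and `sample_inbetweens` helpers are hypothetical stand-ins for the "global" and "local" diffusion passes; they are not functions from this repo or from the paper's code:

```python
# Toy sketch of the DiffusionOverDiffusion idea from the paper.
# sample_keyframes / sample_inbetweens are hypothetical placeholders
# for the global and local diffusion passes.

def diffusion_over_diffusion(global_prompt, local_prompts, depth,
                             sample_keyframes, sample_inbetweens):
    # Coarse pass: sparse keyframes outlining the whole video from one prompt.
    keyframes = sample_keyframes(global_prompt)
    if depth == 0:
        return keyframes

    frames = []
    # Each neighbouring pair of keyframes conditions a finer, more local pass
    # that fills in the in-between frames for that segment.
    for i, (start, end) in enumerate(zip(keyframes[:-1], keyframes[1:])):
        segment = sample_inbetweens(start, end, local_prompts[i])
        # Recursing on each segment would add yet another level of detail;
        # the paper stacks several such levels to reach minutes-long videos.
        frames.extend(segment)
    return frames
```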


Using it, they were able to fine-tune on and generate a whole 11-minute-long Flintstones episode: https://www.reddit.com/r/StableDiffusion/comments/11zwaxx/microsofts_nuwaxl_creates_an_11_minute/

Seeing their impressive results, I couldn't restrain myself from trying to replicate them.

Having read the article, I noticed that the model structure is extremely similar to ModelScope's, and the only difference is the 'video conditioning' layer (in green), whose information is fed into the preexisting U-Net3D by a set of Conv-down cells.


Since they use so-called zero-convolutions, I implemented that layer as a ControlNet-like network, https://github.com/kabachuha/InfiNet, which makes it possible to introduce the new layers without altering the behavior of the existing model. (See DoDBlock in the code.)
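
To illustrate the idea (this is a minimal sketch, not the actual DoDBlock code from the InfiNet repo), here's how a zero-convolution conditioning branch looks in PyTorch. Because the last conv is zero-initialized, the block is an exact identity at the start of training, so the pretrained U-Net weights are left untouched:

```python
import torch
import torch.nn as nn

def zero_module(module):
    # ControlNet-style trick: zero out the parameters so the branch
    # contributes nothing until training moves the weights away from zero.
    for p in module.parameters():
        nn.init.zeros_(p)
    return module

class DoDConditioningBlock(nn.Module):
    """Illustrative DoDBlock-like layer (not the real code): a Conv-down path
    over the conditioning video, injected into a U-Net feature map through a
    zero-initialized convolution."""

    def __init__(self, cond_channels, unet_channels):
        super().__init__()
        self.conv_down = nn.Conv3d(cond_channels, unet_channels,
                                   kernel_size=3, stride=(1, 2, 2), padding=1)
        self.zero_conv = zero_module(nn.Conv3d(unet_channels, unet_channels,
                                               kernel_size=1))

    def forward(self, unet_hidden, cond_video):
        cond = self.conv_down(cond_video)           # match the U-Net's spatial resolution
        return unet_hidden + self.zero_conv(cond)   # exact identity at initialization
```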


I have already tested inference with diffusion_depth=0 and diffusion_depth=1 (any diffusion_depth>0 turns on the DoD blocks), so the model definitely works at inference time.
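
Conceptually, the depth flag just gates whether the conditioning branch is applied (the names below are illustrative, not the exact arguments used in the fork):

```python
# Illustrative gating logic only; the real flag handling lives in the InfiNet fork.
def forward_with_dod(unet_hidden, cond_video, dod_block, diffusion_depth):
    if diffusion_depth == 0:
        # Plain ModelScope-style pass: DoD blocks bypassed, base model unchanged.
        return unet_hidden
    # diffusion_depth > 0: inject the coarser level's frames via the zero-conv branch.
    return dod_block(unet_hidden, cond_video)
```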


I'll start training experiments as soon as I figure out the dataset and the system requirements for it.

P.S. @ExponentialML, contact me on Discord. I'd really appreciate closer communication.

kabachuha avatar Apr 02 '23 11:04 kabachuha

This is great @kabachuha! Thanks for this PR, and sure we can get in touch.

ExponentialML avatar Apr 02 '23 19:04 ExponentialML

@kabachuha thanks for your contribution! I agree, it would be nice to have a Discord server or channel for txt2video showcases and tech discussion. I'll ping you there.

sergiobr avatar Apr 02 '23 23:04 sergiobr

@sergiobr hi, we have something of a text2video team on the Deforum Discord server, join it :) https://discord.gg/deforum

kabachuha avatar Apr 02 '23 23:04 kabachuha

@ExponentialML training works, btw

kabachuha avatar Apr 09 '23 09:04 kabachuha

> @ExponentialML training works, btw

Great! Let me know if you need any assistance getting things up to speed with the new repository changes.

ExponentialML avatar Apr 09 '23 20:04 ExponentialML

Yeah, I'd really appreciate help in carrying it over, since you know the mainline changes much better.

kabachuha avatar Apr 09 '23 22:04 kabachuha

> Yeah, I'd really appreciate help in carrying it over, since you know the mainline changes much better.

By all means. Just let me know when it's ready to merge. If you don't want to resolve the conflicts yourself, I'm more than willing to do it :+1:.

ExponentialML avatar Apr 10 '23 02:04 ExponentialML

Sampling with a video folder dataset is now working correctly.


kabachuha avatar Apr 22 '23 13:04 kabachuha

bump bump

IIIIIIIllllllllIIIII avatar May 24 '23 19:05 IIIIIIIllllllllIIIII

So, I'm going to write an automatic DoD captioner using OpenAI's API (or another LLM provider, maybe a local oobabooga instance).

How it will work:

  1. Multilevel DoD splitting is done with the current script
  2. The lowest-level subclips are captioned with BLIP2 (see @ExponentialML's repo)
  3. The LLM forms the upper-level descriptions given just one global prompt for the whole video

This eliminates the difficulty of writing the mid-level captions by hand.
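
A rough Python sketch of that pipeline, assuming hypothetical caption_with_blip2 and summarize_with_llm helpers (stand-ins for the BLIP2 captioner and the OpenAI/oobabooga call; neither exists in the repo yet):

```python
# Sketch of the planned captioner: BLIP2 captions at the leaves, LLM-written
# summaries at every higher level, all conditioned on one global prompt.
# caption_with_blip2 / summarize_with_llm are hypothetical placeholders.

def build_dod_captions(node, global_prompt,
                       caption_with_blip2, summarize_with_llm, out=None):
    """node: a lowest-level subclip (leaf) or a list of child nodes from the
    multilevel DoD split. Appends every level's caption to `out` and returns
    this node's caption."""
    out = out if out is not None else []
    if not isinstance(node, list):
        caption = caption_with_blip2(node)       # step 2: leaf subclips go through BLIP2
    else:
        child_captions = [build_dod_captions(child, global_prompt,
                                             caption_with_blip2,
                                             summarize_with_llm, out)
                          for child in node]
        # step 3: the LLM condenses the children's captions into this level's
        # (mid- or top-level) description, guided by the one global prompt.
        caption = summarize_with_llm(global_prompt, child_captions)
    out.append(caption)
    return caption
```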

kabachuha avatar May 29 '23 15:05 kabachuha

sooo any updates on this?

Maki9009 avatar Jun 25 '23 15:06 Maki9009

bump bump

IIIIIIIllllllllIIIII avatar Jun 26 '23 06:06 IIIIIIIllllllllIIIII