video-diffusion-pytorch

Noisy output & "text_use_bert_cls" error

Open · GoutamKelam opened this issue · 31 comments

The "name text_use_bert_cls is not defined" error occurs when trying to use explicit texts as mentioned in the 3rd example. The error occurs as the variable is not directly linked to the class in the function "p_losses". On fixing that, when I ran the code, the output samples generated are random noise. I ran the inference for 1K and 50K steps respectively. Can you please guide if I am missing any step.

Attaching the output generated.

(attached image: noisy generated sample)

GoutamKelam avatar Jul 13 '22 06:07 GoutamKelam

What dataset are you training it on?

oxjohanndiep avatar Jul 19 '22 14:07 oxjohanndiep

I'm getting noisy output too when running the provided example (see below). Or does some pre-training need to be done first?

import torch
from video_diffusion_pytorch import Unet3D, GaussianDiffusion

model = Unet3D(
    dim = 64,
    use_bert_text_cond = True,  # this must be set to True to auto-use the bert model dimensions
    dim_mults = (1, 2, 4, 8),
)

diffusion = GaussianDiffusion(
    model,
    image_size = 32,    # height and width of frames
    num_frames = 5,     # number of video frames
    timesteps = 1000,   # number of steps
    loss_type = 'l1'    # L1 or L2
)

videos = torch.randn(3, 3, 5, 32, 32) # video (batch, channels, frames, height, width)

text = [
    'a whale breaching from afar',
    'young girl blowing out candles on her birthday cake',
    'fireworks with blue and green sparkles'
]

loss = diffusion(videos, cond = text)
loss.backward()
# after a lot of training

sampled_videos = diffusion.sample(cond = text, cond_scale = 2)
sampled_videos.shape # (3, 3, 5, 32, 32)

DaddyWesker avatar Jul 27 '22 08:07 DaddyWesker

@DaddyWesker Obviously, you are literally just providing noisy video data as the training input with videos = torch.randn(3, 3, 5, 32, 32).

oxjohanndiep avatar Jul 27 '22 08:07 oxjohanndiep

@oxjohanndiep

Hm, I'm just running the provided code. What kind of video should I provide then? I can't see any info in the README about that.

DaddyWesker avatar Jul 27 '22 08:07 DaddyWesker

You can try using Moving MNIST; what I also tried was the MSR-VTT dataset, to test training with text annotations as well.
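
A rough sketch of feeding Moving MNIST into this model (assuming the standard mnist_test_seq.npy file; names and slice sizes are illustrative):

import numpy as np
import torch

# Moving MNIST ships as mnist_test_seq.npy with shape (20, 10000, 64, 64):
# (frames, sequences, height, width), uint8 in [0, 255]
data = np.load('mnist_test_seq.npy')

# take a few sequences and rearrange into what GaussianDiffusion expects:
# (batch, channels, frames, height, width), floats in [0, 1]
batch = torch.from_numpy(data[:, :4].copy()).float() / 255.  # (20, 4, 64, 64)
videos = batch.permute(1, 0, 2, 3).unsqueeze(1)              # (4, 1, 20, 64, 64)

# the defaults assume 3 channels, so either pass channels = 1 to Unet3D and
# GaussianDiffusion, or repeat the grayscale channel
videos = videos.repeat(1, 3, 1, 1, 1)                        # (4, 3, 20, 64, 64)

# requires image_size = 64 and num_frames = 20 in GaussianDiffusion
loss = diffusion(videos)  # instead of the torch.randn placeholder
loss.backward()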

oxjohanndiep avatar Jul 27 '22 08:07 oxjohanndiep

Should this video have some correlation with the text? For example, if you are saying that Moving MNIST could be used, should the text look like "moving digit five" or something like that?

DaddyWesker avatar Jul 27 '22 08:07 DaddyWesker

Yes, but I have not found any annotations out there for Moving MNIST, hence I only trained without them.

oxjohanndiep avatar Jul 27 '22 08:07 oxjohanndiep

If you have found anything in this area, let me know.

oxjohanndiep avatar Jul 27 '22 09:07 oxjohanndiep

Okay. I will.

DaddyWesker avatar Jul 27 '22 09:07 DaddyWesker

@oxjohanndiep How long have you trained this diffusion model on Moving MNIST, and did you get any reasonable results?

DaddyWesker avatar Jul 29 '22 06:07 DaddyWesker

I trained it for maybe 100 epochs, which took me a good 10 hours with CUDA enabled. No, I did not get any good results, but maybe we can have a video chat to discuss this if you want.

oxjohanndiep avatar Jul 29 '22 06:07 oxjohanndiep

@DaddyWesker

(screenshot of training parameters attached)

oxjohanndiep avatar Jul 29 '22 06:07 oxjohanndiep

Hm. I haven't seen some of those parameters in the training code in the README or in the Trainer class. I guess you wrote your own trainer?

DaddyWesker avatar Jul 29 '22 06:07 DaddyWesker

Yes, I did. Do you get different results with the Trainer class?

oxjohanndiep avatar Jul 29 '22 06:07 oxjohanndiep

I'm currently trying to train this model using the Trainer. When I get some results, I'll let you know.

DaddyWesker avatar Jul 29 '22 06:07 DaddyWesker

Awesome

oxjohanndiep avatar Jul 29 '22 06:07 oxjohanndiep

(GIFs attached: samples at epoch 36000 and epoch 70000)

The model is currently still training. Here are some results: the first is from epoch 36000, the second from epoch 70000. Not sure if these results are good or not.

DaddyWesker avatar Aug 04 '22 06:08 DaddyWesker

How long did you train it for in terms of time?

oxjohanndiep avatar Aug 04 '22 08:08 oxjohanndiep

That looks amazing!

oxjohanndiep avatar Aug 04 '22 08:08 oxjohanndiep

Several days on a 1080 Ti GPU, from Monday till today.

DaddyWesker avatar Aug 04 '22 08:08 DaddyWesker

That's very interesting; I have never trained it for so long, max only around 6 hours! Will give it a go!

oxjohanndiep avatar Aug 04 '22 08:08 oxjohanndiep

Btw, it looks like you have more than 5 frames per video. Did you increase the number of frames accepted by the model as well?

oxjohanndiep avatar Aug 04 '22 08:08 oxjohanndiep

20 frames, as I remember, same as the Moving MNIST samples. Though I can only use batch_size = 1 =)

Here are the parameters I've changed:

diffusion = GaussianDiffusion(
    model,
    image_size = 64,
    num_frames = 20,
    timesteps = 1000,   # number of steps
    loss_type = 'l1'    # L1 or L2
).cuda()

And the batch_size in the Trainer, of course.
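
If memory is the limiting factor, the Trainer's gradient accumulation can stand in for a larger batch, at the cost of slower steps, e.g. (illustrative numbers):

trainer = Trainer(
    diffusion,
    './data',
    train_batch_size = 1,
    gradient_accumulate_every = 8,   # 1 sample x 8 accumulated steps ~ effective batch of 8
    train_lr = 1e-4,
)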

DaddyWesker avatar Aug 04 '22 08:08 DaddyWesker

Alright, let me increase the frame number as well and give it a go. I'll report the results in a couple of days!

oxjohanndiep avatar Aug 04 '22 09:08 oxjohanndiep

@DaddyWesker How did you actually create those little GIFs of the results?

oxjohanndiep avatar Aug 04 '22 17:08 oxjohanndiep

@DaddyWesker And have you tried testing it on a more sophisticated dataset, i.e. Kinetics-600 with its text annotations? It would be very interesting to see how the results are conditioned on text.

oxjohanndiep avatar Aug 04 '22 17:08 oxjohanndiep

No, I haven't tested it on a different dataset. I'll see if I have enough time for this.

About the GIFs: in this repo, video_diffusion_pytorch/video_diffusion_pytorch.py contains this function:
def video_tensor_to_gif(tensor, path, duration = 120, loop = 0, optimize = True):
    images = map(T.ToPILImage(), tensor.unbind(dim = 1))
    first_img, *rest_imgs = images
    first_img.save(path, save_all = True, append_images = rest_imgs, duration = duration, loop = loop, optimize = optimize)
    return images

I'm using this one. It saves the GIF to the given path. (Note that, despite the return statement, the images iterator is already consumed by the unpacking above, so rely on the saved file rather than the return value.)
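
Note it relies on torchvision's transforms being imported as T. A minimal usage sketch (assuming a trained diffusion model):

import torchvision.transforms as T  # video_tensor_to_gif uses T.ToPILImage

# sample returns (batch, channels, frames, height, width); take one video
video = diffusion.sample(batch_size = 1)[0].cpu()
video_tensor_to_gif(video, 'sample.gif')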

DaddyWesker avatar Aug 05 '22 06:08 DaddyWesker

@DaddyWesker Have to admit, your results look far better than mine:

(screenshot of mostly-noise samples attached)

This took me 3 days to train, and I only got 1000 epochs. How were you able to run 70k epochs? And what learning rate did you choose?

oxjohanndiep avatar Aug 09 '22 03:08 oxjohanndiep

train_lr = 1e-4

Well, I don't know what to say about how I was able to train for 70k epochs. I just ran the training code from the README on Moving MNIST. Nothing special.

DaddyWesker avatar Aug 09 '22 04:08 DaddyWesker

import torch
import torchvision.transforms as T  # video_tensor_to_gif needs transforms as T
from video_diffusion_pytorch import Unet3D, GaussianDiffusion, Trainer

def video_tensor_to_gif(tensor, path, duration = 120, loop = 0, optimize = True):
    images = map(T.ToPILImage(), tensor.unbind(dim = 1))
    first_img, *rest_imgs = images
    first_img.save(path, save_all = True, append_images = rest_imgs, duration = duration, loop = loop, optimize = optimize)
    return images

model = Unet3D(
    dim = 64,
    dim_mults = (1, 2, 4, 8),
)

diffusion = GaussianDiffusion(
    model,
    image_size = 64,
    num_frames = 20,
    timesteps = 1000,   # number of steps
    loss_type = 'l1'    # L1 or L2
).cuda()

trainer = Trainer(
    diffusion,
    './data',                         # this folder path needs to contain all your training data, as .gif files, of correct image size and number of frames
    train_batch_size = 1,
    train_lr = 1e-4,
    save_and_sample_every = 1000,
    train_num_steps = 700000,         # total training steps
    gradient_accumulate_every = 2,    # gradient accumulation steps
    ema_decay = 0.995,                # exponential moving average decay
    amp = True                        # turn on mixed precision
)

trainer.train()

sampled_videos = diffusion.sample(batch_size = 4)
# unbind along the batch dimension (dim = 0), so each element is a single video
# of shape (channels, frames, height, width), which video_tensor_to_gif expects
u_sampled_videos = sampled_videos.unbind(dim = 0)
for i in range(len(u_sampled_videos)):
    video_tensor_to_gif(u_sampled_videos[i].cpu(), "result_" + str(i) + ".gif")
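
One caveat: the Trainer also keeps an exponential-moving-average copy of the weights (controlled by ema_decay above) and uses it for the samples it saves during training. Sampling from that copy instead of the raw model may look cleaner; a sketch, assuming the Trainer exposes it as ema_model:

# hypothetical: sample from the EMA weights the Trainer maintains
sampled_videos = trainer.ema_model.sample(batch_size = 4)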

DaddyWesker avatar Aug 09 '22 04:08 DaddyWesker