Video frame interpolation usage

Open Kupchanski opened this issue 3 years ago • 1 comment

Hello! Thanks for your work!

Could you please suggest the best way to use this model to interpolate video? Do I just take two neighbouring frames of the video, run inference on them, and then stitch the new frames back in?

Does the model need to be retrained for each video, or can it be used to interpolate any video with good quality?

Thank you in advance for your reply!

Kupchanski avatar Jun 28 '22 03:06 Kupchanski

Thanks for your interest.

Our pretrained models already achieve relatively good frame interpolation visual quality on common videos.

To get the best visual quality on your specific videos, you can load the provided checkpoint and then fine-tune IFRNet on your collected video datasets, which should contain a sufficient quantity of frame sequences with diverse motion and texture.

The model does not need to be retrained for each video; it only needs to be trained once on a dataset containing all such videos. You can then get good frame interpolation quality on any video in the same domain as that training dataset. For training and inference, you can refer to train_vimeo90k.py and demo_2x.py for 2x interpolation, and train_gopro.py and demo_8x.py for 8x interpolation.
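
For reference, a minimal sketch of a 2x video interpolation loop built around the `model.inference(img0, img1, embt)` entry point used in demo_2x.py. The checkpoint path, the OpenCV-based video I/O, and the file names `input.mp4` / `output_2x.mp4` are illustrative assumptions, not part of the repo scripts; check demo_2x.py for the exact preprocessing your IFRNet version expects.

```python
import cv2
import torch
from models.IFRNet import Model  # import path assumed to match the repo layout used by demo_2x.py

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load the pretrained Vimeo90K checkpoint (path is an assumption; adjust to your setup).
model = Model().to(device).eval()
model.load_state_dict(torch.load('./checkpoints/IFRNet/IFRNet_Vimeo90K.pth', map_location=device))

def to_tensor(frame_bgr):
    """HxWx3 uint8 BGR (OpenCV) -> 1x3xHxW float RGB in [0, 1]."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    return torch.from_numpy(rgb).permute(2, 0, 1).float().div(255.0).unsqueeze(0).to(device)

def to_frame(tensor):
    """1x3xHxW float RGB in [0, 1] -> HxWx3 uint8 BGR for OpenCV."""
    rgb = tensor.squeeze(0).clamp(0, 1).mul(255.0).byte().permute(1, 2, 0).contiguous().cpu().numpy()
    return cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)

cap = cv2.VideoCapture('input.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter('output_2x.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps * 2, (w, h))

# Time embedding t = 0.5 asks for the temporal midpoint between the two input frames.
embt = torch.tensor(0.5).view(1, 1, 1, 1).float().to(device)

ok, prev = cap.read()
while ok:
    ok, curr = cap.read()
    out.write(prev)                      # keep the original frame
    if not ok:
        break                            # last original frame written; no pair left to interpolate
    with torch.no_grad():
        # model.inference(img0, img1, embt) is the entry point used by demo_2x.py;
        # verify the signature against the script if your version differs.
        mid = model.inference(to_tensor(prev), to_tensor(curr), embt)
    out.write(to_frame(mid))             # insert the synthesized middle frame
    prev = curr

cap.release()
out.release()
```

Note that frames whose height and width are not multiples of the network's downsampling stride may need padding before inference; the demo scripts in the repo show the preprocessing the model expects.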

ltkong218 avatar Jun 28 '22 11:06 ltkong218