Yes, you should build a custom dataset to support REDS in lmdb format. In addition, video data in lmdb format will be supported in V1.0.
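For reference, here is a minimal sketch of reading a single frame back from an LMDB file. The file name `reds_train_sharp.lmdb` and the key format `'000_00000000'` (clip + frame index) are assumptions and depend on how your LMDB was generated:

```python
import lmdb
import cv2
import numpy as np

# Open the LMDB read-only; lock/readahead settings are typical for dataloader use.
env = lmdb.open('reds_train_sharp.lmdb', readonly=True, lock=False, readahead=False)
with env.begin(write=False) as txn:
    buf = txn.get('000_00000000'.encode('ascii'))  # key format is an assumption

# Each value is an encoded image (e.g. PNG bytes); decode it back to an array.
img = cv2.imdecode(np.frombuffer(buf, dtype=np.uint8), cv2.IMREAD_COLOR)
print(img.shape)
```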
Has the code been modified?
We are sorry that FLAVR does not support the VFI demo yet.
We are going to support it in #954
mid_channels=64, num_blocks=7, 8 GPUs with batch_size=1 on each. Trained for 600k iterations on REDS, x4.
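To summarize the setup above in one place, a small sketch is below; the dict keys are hypothetical and do not correspond to a real MMEditing config file:

```python
# Illustrative summary of the training setup described above.
train_setup = dict(
    mid_channels=64,      # width of the residual blocks
    num_blocks=7,         # number of residual blocks
    num_gpus=8,
    samples_per_gpu=1,    # batch_size=1 on each GPU
    total_iters=600_000,  # 600k iterations on REDS
    scale=4,              # x4 super-resolution
)
effective_batch_size = train_setup['num_gpus'] * train_setup['samples_per_gpu']  # 8
```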
We can modify the model to support x2 VSR, or reproduce another x2 VSR model together.
@Ashore-lz, please kindly post the error message.
How about using 1 GPU or a smaller batch_size?
Maybe you can reformat the structure of the checkpoint file.
Or load only the `state_dict` from the pre-trained checkpoint file, e.g. via `torch.load('FILE_PATH')['state_dict']`.
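A minimal sketch of stripping a checkpoint down to its `state_dict`; the paths are placeholders, and the `module.` prefix handling is only needed for checkpoints saved from a wrapped (e.g. DataParallel) model:

```python
import torch

ckpt_path = 'pretrained.pth'                   # placeholder path
out_path = 'pretrained_state_dict_only.pth'    # placeholder path

# A full checkpoint usually stores more than the weights
# (optimizer state, meta info, ...); keep only 'state_dict'.
checkpoint = torch.load(ckpt_path, map_location='cpu')
state_dict = checkpoint['state_dict']

# Optionally strip a 'module.' prefix left over from wrapped training.
state_dict = {
    (k[len('module.'):] if k.startswith('module.') else k): v
    for k, v in state_dict.items()
}

# Save the slimmed-down file so it can be loaded directly with load_state_dict().
torch.save(state_dict, out_path)
```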