RealBasicVSR

Official repository of "Investigating Tradeoffs in Real-World Video Super-Resolution"

49 RealBasicVSR issues, sorted by most recently updated

Because RealBasicVSR is evaluated with non-reference metrics (NIQE, PI), what GT or loss is used for training RealBasicVSR?
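
Note: the non-reference metrics only come into play at evaluation time, on real-world clips that have no ground truth. During training, the LQ inputs are synthesized from HQ frames, so those HQ frames serve as GT and the usual supervised losses apply. Below is a minimal sketch of such a combined objective, not the repository's actual code: the loss weights, the non-saturating GAN term, and the `perceptual_loss`/`discriminator` modules are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def generator_loss(sr, gt, perceptual_loss, discriminator,
                   w_pix=1.0, w_percep=1.0, w_gan=5e-2):
    """Combine pixel, perceptual and adversarial terms against the HQ GT frames."""
    loss_pix = F.l1_loss(sr, gt)               # pixel-wise L1 against the HQ GT
    loss_percep = perceptual_loss(sr, gt)      # e.g. VGG feature distance (placeholder module)
    pred_fake = discriminator(sr)              # adversarial term from a placeholder discriminator
    loss_gan = F.softplus(-pred_fake).mean()   # non-saturating GAN loss (illustrative choice)
    return w_pix * loss_pix + w_percep * loss_percep + w_gan * loss_gan
```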

I have a low-quality, low-resolution video, such as a CCTV camera recording in a dark and blurry environment. After processing it with a pre-trained model, the quality of the output...

Thanks for your excellent work. I've run into a problem when training with only one GPU; could you please give me some guidance on non-distributed training commands? The logs...

Why is the code built on such an unstable framework? If the framework changes from one day to the next, the model will not run at all.

Hi, does the released pretrained model support x2 super-resolution?

Training completes the first stage, but when tested on the REDS dataset, abnormal colors appear. GT: ![00000000](https://github.com/ckkelvinchan/RealBasicVSR/assets/44860428/192ebf93-f211-4748-80ef-0e2890f2c5c8) SR image: ![00000000](https://github.com/ckkelvinchan/RealBasicVSR/assets/44860428/e4b0a951-40d0-4b6b-8d42-f821d35249f2) Have you ever encountered this in the first stage?

The VideoLQ dataset contains only low-quality (LQ) images and no HR images. How can it be used for testing?
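
Note: since VideoLQ ships no HR ground truth, evaluation is done with no-reference metrics on the restored outputs. A minimal sketch of that loop is below; `compute_niqe` is a hypothetical placeholder for whichever NIQE implementation you use.

```python
import glob
import cv2
import numpy as np

def evaluate_sequence(output_dir, compute_niqe):
    """Average a no-reference score (e.g. NIQE) over the restored frames of one clip."""
    scores = []
    for path in sorted(glob.glob(f'{output_dir}/*.png')):
        frame = cv2.imread(path)               # restored SR frame, BGR uint8
        scores.append(compute_niqe(frame))     # lower NIQE = better perceptual quality
    return float(np.mean(scores))
```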

I want to train an x2 model, so I revised 'realbasicvsr_wogon_c~.py' in the configs folder. Specifically, I changed the scale parameter from 4 to 2 and tried training. Inference was made...
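
Note: changing `scale` in the config alone may not be enough if the generator's reconstruction head upsamples by a fixed factor (BasicVSR-style heads typically stack two PixelShuffle(2) stages, i.e. a hard-coded 4x), in which case the output no longer matches x2 GT crops. The sketch below is an illustrative, hypothetical head whose factor actually follows the requested scale; it is not code from this repository.

```python
import torch.nn as nn

def make_upsample_head(channels, scale):
    """Stack PixelShuffle(2) blocks until the requested (power-of-two) scale is reached."""
    layers, s = [], scale
    while s > 1:
        layers += [nn.Conv2d(channels, channels * 4, 3, 1, 1),
                   nn.PixelShuffle(2),
                   nn.LeakyReLU(0.1, inplace=True)]
        s //= 2
    layers.append(nn.Conv2d(channels, 3, 3, 1, 1))  # final RGB projection
    return nn.Sequential(*layers)
```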

Nothing too complicated: updated the inference code to work out of the box with mmagic + mmcv 2.0, since the MMinferencer from mmagic wasn't really working as intended....

Hi! I notice that the training process only needs the train_sharp videos from the REDS dataset; the low-quality video inputs are generated automatically in the training pipeline. My question is how...
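
Note: the LQ clips are indeed synthesized on the fly from the sharp HQ frames inside the data pipeline, so no LQ videos need to be stored on disk. Below is a much-simplified single-frame sketch of that idea (random blur, downsampling, noise, compression); the actual pipeline applies a richer second-order degradation chain, so treat this only as an illustration.

```python
import cv2
import numpy as np

def degrade(hq, scale=4, rng=None):
    """Turn one sharp HQ frame (uint8 BGR) into a synthetic LQ training input."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = hq.shape[:2]
    k = int(rng.integers(1, 4)) * 2 + 1                # random odd Gaussian kernel size
    lq = cv2.GaussianBlur(hq, (k, k), 0)               # blur
    lq = cv2.resize(lq, (w // scale, h // scale),      # downsample to LQ resolution
                    interpolation=cv2.INTER_LINEAR)
    lq = np.clip(lq + rng.normal(0, 5, lq.shape), 0, 255).astype(np.uint8)  # additive noise
    ok, enc = cv2.imencode('.jpg', lq,                 # compression artefacts
                           [cv2.IMWRITE_JPEG_QUALITY, int(rng.integers(60, 95))])
    return cv2.imdecode(enc, cv2.IMREAD_COLOR)
```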