sunwei925
Please use the KoNViD-1k videos downloaded from the link https://drive.google.com/file/d/1p-6FJR6pPa9Fh-Va995t0Ycb1Dq9hBJG/view?usp=sharing.
For the other datasets, the video names match those in the publicly downloadable versions of the corresponding datasets.
For LIVE-VQC, you can write your own script; it is straightforward, since you only need to provide the video names and their corresponding MOSs (a minimal sketch is given below).
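A minimal sketch of such a custom script (not the authors' code): it assumes you have already extracted the LIVE-VQC video file names and their MOSs into Python lists from the dataset's metadata, and simply writes them to a CSV annotation file. The file names, MOS values, and output path below are placeholders.

```python
import csv

# Placeholder entries; replace with the real LIVE-VQC video names and MOSs.
video_names = ["A001.mp4", "A002.mp4"]
mos_scores = [72.3, 45.8]

# Write one (video_name, MOS) pair per row.
with open("LIVE_VQC_annotations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["video_name", "MOS"])
    for name, mos in zip(video_names, mos_scores):
        writer.writerow([name, mos])
```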
I have the same problem.
That is correct. Since we only use the PLCC loss to optimize the model, the trained model aims to achieve a high PLCC value between the model outputs and the ground-truth MOSs.
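For reference, a minimal sketch of a differentiable PLCC loss in PyTorch, assuming the common formulation 1 − PLCC (the exact formulation used in this repository may differ, e.g. some implementations use (1 − PLCC) / 2):

```python
import torch


def plcc_loss(preds: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Return 1 - PLCC so that minimizing the loss maximizes the correlation."""
    preds = preds.flatten()
    labels = labels.flatten()
    # Center both vectors.
    preds_c = preds - preds.mean()
    labels_c = labels - labels.mean()
    # Pearson correlation = covariance / (std_pred * std_label); eps avoids division by zero.
    eps = 1e-8
    plcc = (preds_c * labels_c).sum() / (preds_c.norm(2) * labels_c.norm(2) + eps)
    return 1.0 - plcc


# Usage: loss = plcc_loss(model(video_batch), mos_batch)
```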
I also want to use the InternVid-Aesthetics-18M dataset.