DifFace
Could you release your metric calculation script please?
Your work is awesome! I have tested your pre-trained model on CelebaTest and got amazing visual results.
However, I noticed that your paper and the VQFR paper both report metrics for VQFR on CelebaTest, and these metrics differ.
Thus, I ran your model on the CelebaTest dataset provided by VQFR (link here) and used the evaluation script provided by VQFR (link here), but obtained unexpected results.
Therefore, I am very curious about the quantitative metrics reported in the paper. How do you calculate them? Could you release your metric calculation script, please?
For PSNR and LPIPS: https://github.com/chaofengc/IQA-PyTorch
For LPIPS (VGG): https://github.com/richzhang/PerceptualSimilarity
For IDS: see the script of VQFR.
For FID: https://github.com/mseitzer/pytorch-fid
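Of the metrics above, PSNR is simple enough to verify by hand. Below is a minimal NumPy sketch of the standard PSNR formula (assuming 8-bit images, peak value 255); the linked IQA-PyTorch toolbox is what was actually used, and its result may differ slightly depending on color-space handling, so treat this only as a sanity check:

```python
import numpy as np

def psnr(img1: np.ndarray, img2: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two images of identical shape.

    Standard formula: 10 * log10(peak^2 / MSE). This is a plain-NumPy
    sketch, not the toolbox implementation referenced above.
    """
    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)
    mse = np.mean((img1 - img2) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: restored vs. ground-truth crops (random data for illustration)
rng = np.random.default_rng(0)
gt = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
restored = np.clip(gt.astype(np.int16) + rng.integers(-5, 6, size=gt.shape),
                   0, 255)
print(f"PSNR: {psnr(restored, gt):.2f} dB")
```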
As far as I know, for the FID metric, VQFR and GFPGAN compute it between the restored faces and the whole FFHQ dataset. In our paper, however, it is computed directly between the restored faces and the corresponding ground-truth images.
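The protocol matters because FID is a Fréchet distance between the Gaussian statistics of two image sets, so changing the reference set changes the number. The sketch below illustrates this with a simplified diagonal-covariance Fréchet distance (real FID uses full covariance matrices of Inception-v3 features, as in the linked pytorch-fid repo; the feature vectors here are random stand-ins, and all names are illustrative):

```python
import numpy as np

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Fréchet distance between two Gaussians with *diagonal* covariances.

    Simplification of the FID formula so it stays in plain NumPy:
        ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2))
    Real FID uses full covariances of Inception features.
    """
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))

def stats(x):
    # Per-dimension mean and variance of a set of feature vectors.
    return x.mean(axis=0), x.var(axis=0)

# The two protocols differ only in which set supplies the reference stats:
#   VQFR/GFPGAN: statistics of the whole FFHQ dataset
#   this paper:  statistics of the paired ground-truth images
# (random stand-ins for Inception features below)
rng = np.random.default_rng(0)
restored = rng.normal(0.0, 1.0, size=(500, 16))
gt_paired = restored + rng.normal(0.0, 0.1, size=restored.shape)
ffhq_like = rng.normal(0.3, 1.2, size=(2000, 16))

fid_vs_gt = frechet_distance_diag(*stats(restored), *stats(gt_paired))
fid_vs_ffhq = frechet_distance_diag(*stats(restored), *stats(ffhq_like))
print(fid_vs_gt, fid_vs_ffhq)  # different reference sets, different numbers
```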
Thanks for your scripts~