Chaofeng Chen

12 comments by Chaofeng Chen

Thanks for your interest. Since I have no experience with the conversion, I am not able to give detailed instructions. You may try the official tutorials, for example: - https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html -...

The input width and height should be multiples of 32 (16) for the x2 (x4) model.
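For arbitrary input sizes, one option is to pad the image up to the next valid multiple before inference and crop back afterwards. A minimal sketch (plain Python, independent of the actual model code) that computes the required target shape:

```python
def pad_amount(size: int, multiple: int) -> int:
    """Extra pixels needed so that `size` becomes a multiple of `multiple`."""
    return (multiple - size % multiple) % multiple

def padded_shape(h: int, w: int, multiple: int = 32):
    """Target (height, width) after padding, e.g. multiple=32 for the x2 model."""
    return h + pad_amount(h, multiple), w + pad_amount(w, multiple)

# 224 is already a multiple of 32, while 250 is rounded up to 256
print(padded_shape(224, 250))  # -> (224, 256)
```

The same helper with `multiple=16` would cover the x4 model.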

224×224 should be ok. You may try 256×256, which should definitely work.

Here are the steps to visualize multiple codes: 1. Remove unused codes. VQGAN cannot make full use of all the codes in the codebook, and many of them are never...

Example visualization code is provided in https://github.com/chaofengc/FeMaSR/blob/main/vis_codebook.py
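For step 1 above, a rough sketch of how unused codebook entries could be detected, by counting how often each code index appears across a set of quantized images (the function and variable names here are illustrative, not from the repository):

```python
from collections import Counter

def used_codes(index_maps, codebook_size):
    """Count occurrences of each codebook index over many images and
    return the indices that are actually used at least once."""
    counts = Counter()
    for idx_map in index_maps:  # each idx_map: flat list of code indices for one image
        counts.update(idx_map)
    return sorted(i for i in range(codebook_size) if counts[i] > 0)

# toy example: codebook of size 6, but only codes 0, 2 and 5 ever appear
maps = [[0, 2, 2, 5], [5, 0, 0, 2]]
print(used_codes(maps, 6))  # -> [0, 2, 5]
```

Codes that never appear can then simply be skipped during visualization.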

Thanks for the reference, Zhou. I am the author of the `IQA-PyTorch` package. As explained in the official implementation of [pytorch-fid](https://github.com/mseitzer/pytorch-fid): > FID is a measure of similarity between two datasets...
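For reference, the Fréchet distance that pytorch-fid computes between the Inception feature statistics $(\mu_x, \Sigma_x)$ of the real set and $(\mu_g, \Sigma_g)$ of the generated set is

$$
\mathrm{FID} = \lVert \mu_x - \mu_g \rVert_2^2 + \operatorname{Tr}\!\left(\Sigma_x + \Sigma_g - 2\,(\Sigma_x \Sigma_g)^{1/2}\right)
$$

which is why it is defined between two sets of images rather than per image pair.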

Hi, I retrained the model with `Setting B` over the past two days and it works fine. With our generated test images, it reaches the best PSNR/SSIM/LPIPS of `22.43/0.5909/0.3437` at...

The training images are generated with `degradation_bsrgan` and the testing images with `degradation_bsrgan_plus`, using the provided script `generate_dataset.py`. We did not make any changes to this code. Please note...

No, resizing at the beginning will further enlarge the degradation space. This might also be the problem in the current online mode; you can try setting `use_resize_crop` to false when...
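Assuming the option lives in a BasicSR-style training YAML under the training dataset section (the exact location is an assumption here, only the option name comes from the discussion above), disabling it would look something like:

```yaml
datasets:
  train:
    # disable the extra resize-crop augmentation so the online
    # degradation space is not enlarged further (assumed placement)
    use_resize_crop: false
```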

Thanks for your interest. Here are brief answers: 1. If you want to use your own trained `.pth` model, you may follow this code:
```
metric = pyiqa.create_metric('model_name')
metric.net.load_state_dict(torch.load('your/model/path.pth'))
```
...