GFPGAN-onnxruntime-demo
This is the onnxruntime inference code for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior (CVPR 2021). Official code: https://github.com/TencentARC/GFPGAN
I used your script to convert the GFPGAN .pth model into an ONNX model, but I am getting horrible results. The output image is the same as the input image (in terms of...
Batching
I will release a PR that supports batching the model via ONNX. I have a prototype working.
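For anyone waiting on that PR: the stock export is fixed to a single image, so supporting batching starts at export time. Below is a minimal sketch of re-exporting with a dynamic batch axis; it is not the PR's actual code, and the `model` variable and file names are illustrative.

```python
import torch

# Assumes `model` is a loaded GFPGAN generator in eval mode; the names below
# are illustrative, not taken from this repo's torch2onnx.py.
dummy = torch.randn(1, 3, 512, 512)  # GFPGAN works on 512x512 RGB crops
torch.onnx.export(
    model,
    dummy,
    "GFPGANv1.4_dynamic.onnx",
    input_names=["input"],
    output_names=["output"],
    # Leave dimension 0 symbolic so the graph accepts any batch size.
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=11,
)
```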
Such great work. I am looking for quicker inference for GFPGAN. Is there any chance we can load GFPGAN.onnx and run it on TensorRT? Looking forward to your...
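One low-effort way to try TensorRT is through onnxruntime's TensorRT execution provider rather than standalone TensorRT. This is a sketch under the assumption that an onnxruntime-gpu build with TensorRT support is installed; the model file name is illustrative.

```python
import onnxruntime as ort

# Request TensorRT first and fall back to CUDA/CPU if it is unavailable.
session = ort.InferenceSession(
    "GFPGANv1.4.onnx",
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)
print(session.get_providers())  # verify which provider was actually selected
```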
In a GPU environment, the inference time with onnxruntime is basically the same as with PyTorch.
Following the steps above, I successfully exported the v1.4 and v1.3 models to ONNX. In my testing, the results seem best only when the input image contains just the head. My idea was to crop the head out of a full-body photo, resize it to 512, run the model to get the generated image, then resize the result back to the original size and paste it into the cropped position. However, this approach produces striping artifacts along the edges of the cropped region, as shown below. **How can this problem be solved?**
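Striping at the paste boundary is typical of hard-edged compositing; blending the restored crop back with a feathered mask usually hides the seam. A minimal sketch follows; the helper and its arguments are hypothetical, not code from this repo.

```python
import cv2
import numpy as np

def paste_face(original, restored_face, box, feather=16):
    # Paste a restored face back into `original` at `box` = (x, y, w, h),
    # using a Gaussian-feathered mask so the crop edges blend instead of striping.
    x, y, w, h = box
    face = cv2.resize(restored_face, (w, h)).astype(np.float32)
    mask = np.zeros((h, w), dtype=np.float32)
    mask[feather:h - feather, feather:w - feather] = 1.0
    mask = cv2.GaussianBlur(mask, (0, 0), feather / 2)[..., None]  # soften edges
    out = original.astype(np.float32)
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = mask * face + (1.0 - mask) * region
    return out.astype(np.uint8)
```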
Nice work! I ran it on a 4090 and the inference time is 0.6 s, which is a little long. Is this normal?
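Timing numbers like this are easy to skew by measuring the first call, which includes session and CUDA initialization. A quick way to check, assuming a CUDA build of onnxruntime (the input name is looked up from the session rather than assumed):

```python
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("GFPGANv1.4.onnx", providers=["CUDAExecutionProvider"])
name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 512, 512).astype(np.float32)

session.run(None, {name: x})  # warm-up: excludes one-time initialization cost
t0 = time.perf_counter()
for _ in range(10):
    session.run(None, {name: x})
print(f"mean latency: {(time.perf_counter() - t0) / 10:.3f}s")
```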
Hi, is there any chance I can do batch inference with the ONNX model? I notice the example does single-image inference. Does the ONNX model generated by torch2onnx.py support...
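Assuming the model has been re-exported with a dynamic batch axis (see the export sketch above; the stock export is fixed to batch size 1), batched inference is just a matter of stacking preprocessed faces:

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("GFPGANv1.4_dynamic.onnx", providers=["CPUExecutionProvider"])
name = session.get_inputs()[0].name

# Four preprocessed 512x512 faces stacked along the batch axis (random here
# for illustration; real inputs would be normalized face crops).
batch = np.random.rand(4, 3, 512, 512).astype(np.float32)
out = session.run(None, {name: batch})[0]
print(out.shape)  # expect (4, 3, 512, 512) if the batch axis is dynamic
```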
I have noticed a quality decrease when comparing the results to GFPGAN 1.4 on Replicate. Any idea why that could be happening?
Running Ubuntu 22.04 LTS I get the following error:

```
Traceback (most recent call last):
  File "/home/user/GFPGAN-onnxruntime-demo/torch2onnx.py", line 67, in <module>
    ort_session = onnxruntime.InferenceSession(onnx_model_path)
  File "/home/user/miniconda3/envs/richard-roop/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 396, in __init__
    raise e...
```
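The traceback is cut off, but a common cause of InferenceSession raising at construction is that, since onnxruntime 1.9, builds with multiple execution providers require an explicit providers list. A likely fix for the call quoted above, kept to the CPU provider as the safe fallback:

```python
import onnxruntime

# Passing providers explicitly is required on onnxruntime >= 1.9 builds that
# ship more than one execution provider.
ort_session = onnxruntime.InferenceSession(
    onnx_model_path,
    providers=["CPUExecutionProvider"],
)
```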
great work 😊