GFPGAN to TorchScript/TensorRT
Hello, I am trying to convert the GFPGAN model to TorchScript/TensorRT to improve inference performance. Have any efforts been made on this yet?
So far I have made a successful conversion to ONNX (including the StyleGAN decoder). However, conversion to TorchScript (or even just tracing) results in errors in the StyleGAN decoder part.
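For reference, a minimal sketch of the ONNX export path I used, assuming the v1.3/v1.4 "clean" architecture; the arch kwargs, checkpoint key, and forward kwargs are taken from my reading of the repo and may need adjusting for other model versions:

```python
import torch
from gfpgan.archs.gfpganv1_clean_arch import GFPGANv1Clean


class ExportWrapper(torch.nn.Module):
    """Keep only the restored image so ONNX gets a single tensor output."""

    def __init__(self, net):
        super().__init__()
        self.net = net

    def forward(self, x):
        # return_rgb=False drops intermediate RGB outputs; randomize_noise=False
        # uses the stored noise buffers so the exported graph is deterministic.
        return self.net(x, return_rgb=False, randomize_noise=False)[0]


model = GFPGANv1Clean(
    out_size=512, num_style_feat=512, channel_multiplier=2,
    decoder_load_path=None, fix_decoder=False, num_mlp=8,
    input_is_latent=True, different_w=True, narrow=1, sft_half=True)
state = torch.load('GFPGANv1.4.pth', map_location='cpu')
model.load_state_dict(state.get('params_ema', state.get('params')), strict=True)
model.eval()

dummy = torch.randn(1, 3, 512, 512)  # normalized face crop, NCHW
torch.onnx.export(
    ExportWrapper(model), dummy, 'gfpgan.onnx',
    input_names=['input'], output_names=['output'],
    opset_version=11, do_constant_folding=True)
```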
Hi, I successfully exported a TRT engine with TensorRT 8.5, but the ONNX result != the TensorRT result.
Did you run into this problem?
This is my Polygraphy output.
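For anyone reproducing the mismatch, a sketch of the comparison using Polygraphy's Python API (file names and tolerances are placeholders):

```python
# Compare ONNX Runtime and TensorRT outputs on the same ONNX file.
from polygraphy.backend.onnxrt import OnnxrtRunner, SessionFromOnnx
from polygraphy.backend.trt import EngineFromNetwork, NetworkFromOnnxPath, TrtRunner
from polygraphy.comparator import Comparator, CompareFunc

build_engine = EngineFromNetwork(NetworkFromOnnxPath("gfpgan.onnx"))
runners = [
    OnnxrtRunner(SessionFromOnnx("gfpgan.onnx")),
    TrtRunner(build_engine),
]

# Run both backends on the same input and check outputs element-wise.
results = Comparator.run(runners)
Comparator.compare_accuracy(
    results, compare_func=CompareFunc.simple(atol=1e-3, rtol=1e-3))
```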
@NingNanXin Hi, you can try this https://github.com/bychen7/Face-Restoration-TensorRT.
@NingNanXin https://github.com/bychen7/Face-Restoration-TensorRT Can this repo be run on Windows 10?
I think it is possible, and the code does not need to be changed. However, I have not tested it on Win10, and it would require replacing all dependencies with their Windows versions.
Thanks for your work. I think you could push it to Tensorrtx by wangxinyu so that more people can benefit from it. I tested it on Ubuntu 18.04 with TRT 8.2 and got good performance, but you didn't set FP16 precision, so do all of your ONNX models stay in FP32?
Also, you could try TRT > 8.5; it can handle a convolution with two inputs.
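In case it helps, a minimal sketch of building an FP16 engine from the ONNX file with the TensorRT Python API (paths are placeholders):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("gfpgan.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # enable FP16 kernels where supported

engine_bytes = builder.build_serialized_network(network, config)
with open("gfpgan_fp16.engine", "wb") as f:
    f.write(engine_bytes)
```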
Thank you for your suggestion. The quality of fp16 generation may decrease. This work was done a year ago, and I reviewed the records from that time. Please refer to the following comparison results.
@bychen7 Yes, I converted GFPGAN to FP16 and hit the same problem. Even retraining the model does not solve it. However, GPEN converts to FP16 correctly. Another question: do you find that TRT inference is slower than Torch at a resolution of 512*512? For example, the GFPGAN JIT model averages 14 ms/frame, but FP32 TRT averages 36 ms/frame; my test device is an RTX 2080 Ti.
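For what it's worth, a rough sketch of how I would measure the ms/frame numbers for the JIT model (the file path is a placeholder; CUDA is synchronized so the comparison with the TRT engine is fair):

```python
import time
import torch

model = torch.jit.load('gfpgan.jit').cuda().eval()  # placeholder path
x = torch.randn(1, 3, 512, 512, device='cuda')

with torch.no_grad():
    for _ in range(10):  # warm-up iterations
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(100):
        model(x)
    torch.cuda.synchronize()

print(f'{(time.perf_counter() - start) * 1000 / 100:.1f} ms/frame')
```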
@NingNanXin Did you solve it?
@debasishaimonk No. I tried keeping some layers in FP32 and the others in FP16, but the TRT engine exported that way did not work.
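A sketch of what that mixed-precision attempt could look like with the TensorRT Python API, assuming `network`, `config`, and `builder` from an ONNX parse as above; the layer-name filter is purely hypothetical:

```python
import tensorrt as trt

# Build with FP16 enabled, but force suspect decoder layers back to FP32.
config.set_flag(trt.BuilderFlag.FP16)
config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)

for i in range(network.num_layers):
    layer = network.get_layer(i)
    # Hypothetical filter: pin layers whose names suggest the StyleGAN decoder.
    if "StyleConv" in layer.name or "ToRGB" in layer.name:
        layer.precision = trt.float32
        layer.set_output_type(0, trt.float32)

engine_bytes = builder.build_serialized_network(network, config)
```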
My device is a laptop 4070 with input [1, 3, 512, 512]; FP16 inference averages 16~18 ms/frame and FP32 about 42 ms/frame. What's more, the FP32 inference result is correct, but FP16 just produces a black image. Is your FP16 result normal?