
Slow ONNX inference

Open airogachev opened this issue 2 years ago • 1 comments

After the "default" model was converted to ONNX format, its inference speed on GPU dropped by nearly 4x. Are there any solutions or updates related to ONNX inference?

airogachev avatar May 31 '22 08:05 airogachev

I don't really know, but as a hobbyist I can suggest checking whether the code copies or re-creates the ONNX graph/session multiple times; if so, that could perhaps be alleviated by reusing a single session and its operations. Also check the default parameters: maybe you need to pass some inference/session options. A minimal sketch of what I mean is below.
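This is only a sketch with onnxruntime, assuming a generic exported model file (the filename and input layout here are placeholders, not anything from this repo): build the session once with optimizations enabled and the CUDA provider requested, then reuse it for every image instead of reconstructing it per call.

```python
import numpy as np
import onnxruntime as ort

# Build the session ONCE and reuse it; re-creating it per image is expensive.
opts = ort.SessionOptions()
opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL

sess = ort.InferenceSession(
    "RealESRGAN_x4plus.onnx",            # assumed filename for illustration
    sess_options=opts,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
input_name = sess.get_inputs()[0].name

def upscale(img: np.ndarray) -> np.ndarray:
    # img: float32 NCHW tensor, as assumed for the exported model
    return sess.run(None, {input_name: img})[0]
```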

Adamage avatar Jun 02 '22 07:06 Adamage

I hit the same issue! I used the default pytorch2onnx.py script from this repo. I got the ONNX model, but inference slowed down (from 0.13 s to 0.21 s).
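A quick way to check whether the slowdown comes from a silent fallback to CPU or from counting one-time setup cost: print the active providers and time the model after a warm-up run. This is a hedged sketch with onnxruntime; the model path and input shape are assumptions, not taken from pytorch2onnx.py.

```python
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",                          # assumed path to the exported model
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
# If CUDAExecutionProvider is missing here, the run is falling back to CPU.
print("Active providers:", sess.get_providers())

x = np.random.rand(1, 3, 256, 256).astype(np.float32)  # assumed input shape
name = sess.get_inputs()[0].name

sess.run(None, {name: x})                  # warm-up: first call pays setup cost

t0 = time.time()
for _ in range(10):
    sess.run(None, {name: x})
print("avg per run:", (time.time() - t0) / 10, "s")
```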

haobo724 avatar Jul 11 '23 12:07 haobo724