
Very inefficient inference

Open n00mkrad opened this issue 2 years ago • 1 comment

Hello, the inference code seems to have rather severe bottlenecks: CUDA usage is only around 25%.

[Screenshot: GPU utilization around 25% during XVFI inference]

RIFE and other interpolation networks usually run at 80-95% usage.

Are any optimizations planned to reduce this overhead?

n00mkrad · Aug 25 '21 14:08

Hi @n00mkrad. How does the inference time in your experiment compare to the inference time reported in our paper? Was it at a comparable level?
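For reference, one way to measure the forward pass alone (excluding frame saving and other I/O) is with CUDA events. This is only a minimal sketch, not our actual test code; `net` and the input shape below are hypothetical placeholders:

```python
import torch

# Hypothetical stand-ins for the real model and input clip.
net = torch.nn.Conv3d(3, 3, kernel_size=3, padding=1).cuda()
frames = torch.randn(1, 3, 2, 256, 256, device="cuda")

# CUDA kernels launch asynchronously, so bracket the forward pass
# with events and synchronize before reading the elapsed time.
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

torch.cuda.synchronize()
start.record()
with torch.no_grad():
    out = net(frames)
end.record()
torch.cuda.synchronize()
print(f"forward pass: {start.elapsed_time(end):.2f} ms")
```

Timing this way separates the network's compute time from the data I/O, which is exactly the distinction discussed below.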

Since every interpolated frame is saved to disk during the test phase, CUDA usage can be lower at test time than during training because of the data input/output process. In addition, XVFI-Net is lightweight in parameters, so the data I/O is not negligible relative to the forward pass, which may also reduce CUDA usage (we have observed this even when the data pipeline and the network forward pass run in parallel; see the sketch below).
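To illustrate the point about overlapping frame saving with inference, here is a minimal sketch (not the repository's code) that moves `cv2.imwrite` to a background thread so disk writes no longer block the GPU between forward passes. The queue size, file naming, and `frame_np` variable are arbitrary assumptions:

```python
import queue
import threading

import cv2
import numpy as np

# Bounded queue of (path, frame) pairs awaiting a disk write.
save_queue: "queue.Queue[tuple[str, np.ndarray]]" = queue.Queue(maxsize=8)

def writer_worker():
    # Drain the queue and write frames to disk; None is the stop signal.
    while True:
        item = save_queue.get()
        if item is None:
            break
        path, frame = item
        cv2.imwrite(path, frame)
        save_queue.task_done()

thread = threading.Thread(target=writer_worker, daemon=True)
thread.start()

# In the inference loop, hand each interpolated frame to the writer
# instead of blocking on disk I/O (frame_np is a hypothetical HxWx3
# uint8 array already copied back to the CPU):
# save_queue.put((f"out/{idx:05d}.png", frame_np))

# After the loop, flush the remaining frames and stop the worker.
save_queue.join()
save_queue.put(None)
thread.join()
```

Because the network is lightweight, even this kind of overlap may not fully hide the I/O cost, which matches what we observed.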

Although we have not planned any work to reduce this overhead, we will consider your suggestion if you can propose a concrete way to do so.

hjSim · Aug 27 '21 05:08