EDSR-PyTorch
Running speed.
Can you share your inference speed, for example when upscaling a 1920×1080 image to 3840×2160? @thstkdgus35 @yulunzhang
Hello @xiaozhi2015,
Actually, it is not straightforward to measure the running speed, because it depends on:
- CUDA initialization time
- GPU API call overheads
- Overheads from reading/writing large images from disk
Therefore, the running time can vary depending on your situation.
In general, if you run the code on more than 100 images and average the running time, you can get an approximate running speed.
I will also try that and report to you if possible.
Thank you!
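For reference, here is a minimal sketch (not part of this repository) of how the pure GPU inference time could be measured while excluding model loading and disk I/O. The bicubic `Upsample` module below is only a placeholder standing in for the actual EDSR model:

```python
import time
import torch

device = torch.device('cuda')

# Placeholder module; swap in the real EDSR x2 model here.
model = torch.nn.Upsample(scale_factor=2, mode='bicubic', align_corners=False)
model = model.to(device).eval()

# Dummy 1920x1080 RGB input kept on the GPU so disk I/O is excluded.
lr = torch.rand(1, 3, 1080, 1920, device=device)

with torch.no_grad():
    # Warm-up iterations absorb CUDA initialization and first-call overheads.
    for _ in range(10):
        model(lr)
    torch.cuda.synchronize()

    n_runs = 100
    start = time.time()
    for _ in range(n_runs):
        model(lr)
    # Synchronize so asynchronous GPU kernels are included in the measured time.
    torch.cuda.synchronize()
    elapsed = (time.time() - start) / n_runs

print(f'Average inference time per 1080p -> 2160p frame: {elapsed * 1000:.1f} ms')
```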
Thanks a lot for your reply @thstkdgus35. What I want to know is the pure inference time (excluding model initialization, image reading, and image saving). Looking forward to your report.
Hi @sanghyun-son, I am facing the problem described in https://github.com/sanghyun-son/EDSR-PyTorch/issues/319#issue-996722398. Could you help me?