
RTMO ONNX running time

fengyurenpingsheng opened this issue Nov 22 '24 · 5 comments

Thanks for your great work. We tested the RTMO small model with ONNX on an RTX 4090, and the inference time was approximately 50 ms. This differs significantly from the V100 results reported in your paper; in theory, the 4090 should be faster than a V100. We tested one image at a time. Did you test with a batch of 8 images and report the average time per image?
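For reference, a timing loop along the following lines illustrates the kind of measurement at issue (a minimal sketch, not the poster's actual script; the model filename, the dynamic-batch export, and the 640x640 input shape are assumptions based on the RTMO-s config):

```python
import time

import numpy as np
import onnxruntime as ort

# Model path is a placeholder; RTMO-s is trained at 640x640 input resolution.
sess = ort.InferenceSession(
    "rtmo-s.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
input_name = sess.get_inputs()[0].name

for batch in (1, 8):  # assumes the export allows a dynamic batch dimension
    x = np.random.rand(batch, 3, 640, 640).astype(np.float32)
    for _ in range(10):  # warmup: first runs pay CUDA init and kernel selection costs
        sess.run(None, {input_name: x})
    runs = 100
    t0 = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {input_name: x})  # run() is synchronous, no explicit sync needed
    per_run = (time.perf_counter() - t0) / runs
    print(f"batch={batch}: {per_run * 1e3:.2f} ms/run, "
          f"{per_run / batch * 1e3:.2f} ms/image")
```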

fengyurenpingsheng · Nov 22 '24

I may not be the right person to answer, but these results are probably reported for images that have already been loaded and kept in memory as arrays. Loading an image with OpenCV (which you are probably using) takes over 30 ms, and displaying it, saving results, etc. takes even longer.
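A quick way to check is to time the disk read separately from inference; a minimal sketch (the image path is a placeholder):

```python
import time

import cv2

t0 = time.perf_counter()
img = cv2.imread("test.jpg")       # disk read + JPEG decode
t1 = time.perf_counter()
img = cv2.resize(img, (640, 640))  # typical preprocessing step
t2 = time.perf_counter()

print(f"imread: {(t1 - t0) * 1e3:.1f} ms, resize: {(t2 - t1) * 1e3:.1f} ms")
# If imread alone takes tens of milliseconds, a "50 ms per image" figure that
# includes it is not comparable to a pure-inference benchmark.
```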

davidpagnon · Dec 01 '24

Hi @fengyurenpingsheng, all models in the RTM series are deployed and tested through mmdeploy, which provides a model inference speed testing tool that adheres to rigorous benchmarking standards. I suspect your profiling script overlooks some details, which would explain the discrepancy between your results and ours.

Tau-J · Dec 03 '24

@davidpagnon Thank you for helping me respond to this issue. Being occupied with LLM-related work, I hardly have time to work on pose estimation, so my responses to issues may be slow. I greatly appreciate your enthusiastic help.

Tau-J · Dec 03 '24

@Tau-J Thanks for your warm response. Could you share how to convert the RTMO PyTorch model to ONNX without relying on libraries from mmpose, using only code from this project?

fengyurenpingsheng · Jan 10 '25

@fengyurenpingsheng: Here is the script to deploy your model to .onnx (but you will need to install mmdeploy): https://mmpose.readthedocs.io/en/latest/user_guides/how_to_deploy.html#model-conversion
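If you would rather call the conversion from Python than from the CLI, mmdeploy also exposes a torch2onnx API. A rough sketch, where every path is a placeholder to substitute with the actual RTMO deploy config, model config, and checkpoint:

```python
from mmdeploy.apis import torch2onnx

# All paths below are placeholders; substitute the RTMO files you actually use.
torch2onnx(
    img="demo.jpg",                    # any sample image, used to trace the model
    work_dir="work_dir",
    save_file="rtmo-s.onnx",
    deploy_cfg="pose-detection_rtmo_onnxruntime_dynamic-640x640.py",
    model_cfg="rtmo-s_8xb32-600e_coco-640x640.py",
    model_checkpoint="rtmo-s.pth",
    device="cpu",
)
```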

Also, there may be more behind the speed loss you are noticing. @ANaaim reported a similar outcome, and it seems to be documented in multiple places that you may lose speed when converting from .pth to .onnx. The silver lining is that it appears fixable, although I haven't dug deep into it: https://github.com/microsoft/onnxruntime/issues/10303#issuecomment-1260015826
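Independently of that thread, one frequent culprit is onnxruntime silently falling back to the CPU execution provider, or spending time on implicit host/device copies. A minimal sketch to verify the provider and bind inputs/outputs explicitly (the model filename and input shape are assumptions):

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "rtmo-s.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # CUDAExecutionProvider should be listed first

x = np.random.rand(1, 3, 640, 640).astype(np.float32)

# IOBinding avoids implicit copies between host and device on every call.
io = sess.io_binding()
io.bind_cpu_input(sess.get_inputs()[0].name, x)
for out in sess.get_outputs():
    io.bind_output(out.name)  # let ORT allocate outputs on the device
sess.run_with_iobinding(io)
outputs = io.copy_outputs_to_cpu()
```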

davidpagnon · May 04 '25