
GPU usage rate

Open oscarfossey opened this issue 2 years ago • 4 comments

Hello, thanks for the great repo, it is very useful. I tried to benchmark the speed of different whole-body top-down models on my machine (Tesla T4, 16 GB VRAM). Everything works well, but the demo script uses only 20% of my GPU. How can I make the GPU usage rate higher?

oscarfossey avatar Dec 30 '22 10:12 oscarfossey

Hi, you can try this tool for inference speed benchmarking: https://github.com/open-mmlab/mmpose/blob/master/tools/analysis/benchmark_inference.py. The demo script is meant to show the usage of the inference interfaces, and its efficiency may be limited by I/O or visualization.

ly015 avatar Feb 03 '23 11:02 ly015

I managed to benchmark the speed, thanks. My question was about how to increase the usage rate. Is there something like a batch_size somewhere? I am talking about inference time, not visualization.

oscarfossey avatar Feb 03 '23 15:02 oscarfossey

This is a relevant question for me too.

colt18 avatar Apr 19 '23 22:04 colt18

Hi @oscarfossey, @colt18, if you are using the demo scripts, there is an inference-related function that takes an image (or images) as input. For example, in the top_down_img_demo.py script, the function inference_top_down_pose_model takes an image input. If you can somehow provide a list of image paths (or numpy arrays), then the demo can use them as batch inputs. I am not sure whether the demo script has this feature by default.

In case you use configs, I guess you know that there is a batch_size parameter.
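To illustrate the batching idea above, here is a minimal sketch of splitting a list of image paths into batches before feeding them to the model. The `chunked` helper is hypothetical (not part of MMPose), and the commented-out calls assume the MMPose 0.x demo API (`init_pose_model`, `inference_top_down_pose_model`); treat them as a sketch, not a definitive recipe:

```python
def chunked(items, batch_size):
    """Yield successive batches of at most `batch_size` items from a list."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Hypothetical usage with the MMPose 0.x demo API (not executed here):
# from mmpose.apis import init_pose_model, inference_top_down_pose_model
# pose_model = init_pose_model(config_path, checkpoint_path, device='cuda:0')
# for batch in chunked(image_paths, batch_size=8):
#     for img in batch:
#         pose_results, _ = inference_top_down_pose_model(pose_model, img)

# Demonstration of the chunking itself on dummy paths:
image_paths = [f"img_{i}.jpg" for i in range(10)]
batches = list(chunked(image_paths, batch_size=4))
print([len(b) for b in batches])  # → [4, 4, 2]
```

Larger batches generally improve GPU utilization, but only if the per-image Python overhead (image loading, preprocessing) is not the bottleneck; profiling I/O separately is worthwhile before tuning batch_size.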

tojimahammatov avatar Jul 21 '23 05:07 tojimahammatov