GPU usage rate
Hello, thanks for the great repo, very useful. I tried to benchmark the speed of different whole-body top-down models on my machine (Tesla T4, 16 GB VRAM). Everything works well, but the demo script uses only about 20% of my GPU. How can I make the GPU usage rate higher?
Hi, you can try this tool for inference speed benchmarking: https://github.com/open-mmlab/mmpose/blob/master/tools/analysis/benchmark_inference.py. The demo script is meant to demonstrate the inference interfaces, and its efficiency may be limited by I/O or visualization.
I managed to benchmark the speed, thanks. My question was about how to increase the usage rate, though. Is there something like a batch_size somewhere? I am talking about inference time, not visualization.
This is a relevant question for me too.
Hi @oscarfossey, @colt18, if you are using the demo scripts, there is an inference-related function that takes an image (or a list of images) as input. For example, in the top_down_img_demo.py script, the function inference_top_down_pose_model takes the image input. If you can somehow provide a list of image paths (or numpy arrays), the demo could use them as batch inputs. I am not sure whether the demo script has this feature by default, though.
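To illustrate the idea above, here is a minimal sketch of feeding images to a model in fixed-size batches instead of one at a time. Note the assumptions: `run_inference` is a placeholder standing in for an mmpose call such as `inference_top_down_pose_model`, and `batch_size=8` is an arbitrary example value, not a setting taken from the demo script.

```python
from pathlib import Path


def iter_batches(items, batch_size):
    """Yield successive slices of at most `batch_size` items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]


def run_batched(image_dir, run_inference, batch_size=8):
    # Collect image paths up front so we can group them into batches;
    # larger batches generally keep the GPU busier than single-image calls.
    paths = sorted(str(p) for p in Path(image_dir).glob("*.jpg"))
    results = []
    for batch in iter_batches(paths, batch_size):
        # `run_inference` is a hypothetical stand-in for your model call.
        results.extend(run_inference(batch))
    return results
```

The batching helper is generic, so you can reuse it with whatever inference function your version of mmpose actually exposes.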
In case you use configs, I guess you already know that there is a batch_size parameter there.
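For reference, mmpose configs are mmcv-style Python files, and the batch size is typically set there via the dataloader settings. A sketch of the relevant fragment, with illustrative values (not defaults from any particular whole-body model config):

```python
# Illustrative mmpose config fragment (mmcv-style Python config).
# The numbers below are example assumptions, not recommended values.
data = dict(
    samples_per_gpu=32,   # batch size per GPU (the "batch_size" knob)
    workers_per_gpu=2,    # dataloader worker processes per GPU
)
```

Raising `samples_per_gpu` is the usual way to push GPU utilization higher, within your VRAM budget.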