
How to do inference using the webcam?

Open leonzgtee opened this issue 5 years ago • 8 comments

I have tried hrnet_w32 and it shows a desirable result with 73.5 mAP, so I want to use this model in a real-time setting with my camera. The problem is that I cannot get the params c and s used in the function get_affine_transform(); I notice that these two params come from the JSON annotation file in the train and test stages. Could you please tell me how to do inference with HRNet? Thank you very much.

leonzgtee avatar Nov 05 '19 08:11 leonzgtee

You can assign c as the center of the bounding box of a human, and s is derived from the width and height of your bounding box.
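A minimal sketch of how c and s could be computed from a detector box, assuming the preprocessing convention commonly used in this repo's test code (pixel_std = 200, box padded to the model's input aspect ratio, scale enlarged by ~1.25). The helper name `box_to_center_scale` and the 1.25 padding factor are illustrative assumptions, not guaranteed to match the exact code here:

```python
def box_to_center_scale(x, y, w, h, model_w=192, model_h=256, pixel_std=200.0):
    """Convert an (x, y, w, h) person box to the (c, s) pair that
    get_affine_transform() expects — an assumed convention, check the repo.
    """
    # c: center of the bounding box
    center = (x + w * 0.5, y + h * 0.5)

    # Pad the box so it matches the model's input aspect ratio (e.g. 192/256)
    aspect_ratio = model_w / model_h
    if w > aspect_ratio * h:
        h = w / aspect_ratio
    elif w < aspect_ratio * h:
        w = h * aspect_ratio

    # s: box size normalized by pixel_std, slightly enlarged for context
    scale = (w / pixel_std * 1.25, h / pixel_std * 1.25)
    return center, scale
```

For example, a 100x200 box at the origin is padded to 150x200 (to match the 0.75 aspect ratio) before normalization.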

soonchanpark avatar Nov 07 '19 05:11 soonchanpark

@leonzgtee Hi, did you get it working in real time with a camera?

lisa676 avatar Nov 25 '19 03:11 lisa676

For real time demo, you need to add a person detector to get the bounding box of a detected person.
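The pipeline described above (detector box in, pose out, per frame) can be sketched generically. All names here (`run_realtime`, `detect_people`, `estimate_pose`) are hypothetical; the frame source, detector, and HRNet wrapper are injected as callables so this outline stays independent of any particular backbone or capture library:

```python
def run_realtime(read_frame, detect_people, estimate_pose, max_frames=None):
    """Generic capture -> detect -> pose loop.

    read_frame()              -> next frame, or None when the stream ends
    detect_people(frame)      -> list of (x, y, w, h) person boxes
    estimate_pose(frame, box) -> keypoints for that person crop
    """
    results = []
    n = 0
    while max_frames is None or n < max_frames:
        frame = read_frame()
        if frame is None:
            break
        # One pose estimate per detected person in this frame
        poses = [estimate_pose(frame, box) for box in detect_people(frame)]
        results.append(poses)
        n += 1
    return results
```

In a real demo, `read_frame` would wrap something like OpenCV's `cv2.VideoCapture(0).read()`, and `detect_people` any person detector (e.g. a COCO-trained object detector filtered to the person class).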

leoxiaobin avatar Dec 01 '19 15:12 leoxiaobin

@leonzgtee, @lan786 I'm going to use HRNet in a real-time setting with my camera too. Did you get it working?

mpourkeshavarz avatar Feb 22 '20 10:02 mpourkeshavarz

@MozhganPourKeshavarz, @leonzgtee, @lan786 Hi, any progress on a real-time implementation? Any results?

Ru-Van avatar Mar 05 '20 15:03 Ru-Van

Hi all,

With w32, input size 256x192, and batch size 1, the network achieved around 40 fps on Linux (with PyTorch 1.0) and 15 fps on Windows (with PyTorch 0.4.0), on a 1080Ti. We did not count the time for human detection.

Generally, inference is slower on Windows than on Linux; it seems to be a PyTorch bug (spending about three times as long). When I tested with PyTorch 0.4.0 on Windows, performance was much better than with PyTorch 1.0 or 1.1, but still slower than PyTorch 1.0 or 1.1 on Linux.

One more tip: if you want to build a real-time application on a high-performance graphics card, you can enlarge the input size to estimate human pose more precisely. There is little speed difference between 256x192 and 384x288; I guess parallel computation absorbs the extra cost.
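Switching input size as suggested above amounts to picking a different experiment config. A minimal sketch of loading a config and checking its input size with the repo's yacs-style `cfg` (the config path follows the repo's `experiments/coco/hrnet/` naming, but treat both the path and the attribute names as assumptions to verify against your checkout):

```python
# Hypothetical sketch: the repo's configs are yacs CfgNode objects, and
# MODEL.IMAGE_SIZE is stored as [width, height]. Here we mimic that shape
# with a plain dict so the idea is self-contained.
CONFIGS = {
    # assumed paths, modeled on the repo's experiment config naming
    "experiments/coco/hrnet/w32_256x192_adam_lr1e-3.yaml": {"IMAGE_SIZE": [192, 256]},
    "experiments/coco/hrnet/w32_384x288_adam_lr1e-3.yaml": {"IMAGE_SIZE": [288, 384]},
}

def input_size(config_path):
    """Return (width, height) of the model input for a given config."""
    w, h = CONFIGS[config_path]["IMAGE_SIZE"]
    return w, h
```

So moving from the 256x192 to the 384x288 config is a one-line change in which yaml you pass via `--cfg`, at the cost of a larger input tensor.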

soonchanpark avatar Mar 06 '20 00:03 soonchanpark

I'm going to use the model in a real-time setting with my camera, but I don't know how to do it.

WGY907 avatar Jun 27 '20 12:06 WGY907


I'm going to use the model in a real-time setting with my camera, but I failed. Could you share your code with me? Thank you!

dingding-ding avatar Mar 17 '21 12:03 dingding-ding