
Running Video Capture Demo on Windows PC

Greendogo opened this issue 2 years ago · 10 comments

Hey there, could you provide some instruction on setting up and running inference using a webcam on a Windows PC?

I'm getting stuck at the 'tvm' part.

Greendogo avatar Jul 20 '22 20:07 Greendogo

Yes, this would be very helpful for testing the algorithm on various devices running Windows. It would also be better if you could give more information on how one could enable/disable GPU device(s). Thanks

sushil-bharati avatar Jul 21 '22 01:07 sushil-bharati
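On the enable/disable-GPU question: with a plain PyTorch model (as opposed to the TVM demo), device selection follows the standard PyTorch pattern. A hedged sketch, assuming the model has already been loaded:

```python
import torch

# To hide GPUs entirely, set CUDA_VISIBLE_DEVICES="" in the environment
# *before* launching Python; once CUDA is initialized it has no effect.

# Pick GPU when available, otherwise fall back to CPU:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# model = model.to(device).eval()   # move the loaded model
# img = img.to(device)              # inputs must live on the same device
print(device)
```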

Same here, using the jetson demo, it states ModuleNotFoundError: No module named 'tvm'

kevkid avatar Jul 22 '22 22:07 kevkid

The nano_demo is tested on Jetson Nano with TVM support. If you are using Jetson Nano, you could follow this guide to install TVM. If you are using other devices, @MemorySlices could you adapt the TVM demo to a PyTorch model one for a more general demo?

lmxyy avatar Jul 22 '22 23:07 lmxyy

@lmxyy Do you know if the models are CPU friendly? Do we "require" a GPU to run them optimally? I tried it in my CPU-only environment and it takes ~1.96 s to process a frame (448x448x3). Am I doing something wrong?

sushil-bharati avatar Jul 22 '22 23:07 sushil-bharati

The model should be CPU-friendly; we also include some results on Raspberry Pi, where it only takes ~100 ms. But if you directly run the PyTorch model on CPU, I think your result is reasonable, as the CPU backend is not well-optimized.

lmxyy avatar Jul 22 '22 23:07 lmxyy
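The gap between ~1.96 s and ~100 ms is worth measuring carefully, since the first forward pass includes one-time allocation costs. A minimal latency-measurement sketch; the tiny conv model below is a hypothetical stand-in, as loading the real LitePose checkpoint is out of scope here:

```python
import time

import torch
import torch.nn as nn

# Hypothetical stand-in for the LitePose network.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()).eval()

x = torch.randn(1, 3, 448, 448)  # one 448x448 RGB frame
with torch.no_grad():
    model(x)  # warm-up pass, excluded from timing
    t0 = time.perf_counter()
    for _ in range(10):
        model(x)
    ms = (time.perf_counter() - t0) / 10 * 1000

print(f"avg latency: {ms:.1f} ms/frame")
```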

Thank you, @lmxyy, for the prompt response. That explains why I am getting such a slow speed; I am indeed running the model(s) with PyTorch's CPU backend. So, is there a way to run the optimized model(s) in a CPU-only environment, or is that out of scope?

sushil-bharati avatar Jul 22 '22 23:07 sushil-bharati

You could try TVM to optimize the CPU backend. But I think this will cost you much more time...

lmxyy avatar Jul 24 '22 22:07 lmxyy

Hi @sushil-bharati, would it be possible to share how you got it to run using the PyTorch CPU backend? I tried doing model(img) and got:

conv2d() received an invalid combination of arguments - got (numpy.ndarray, Parameter, NoneType, tuple, tuple, tuple, int), but expected one of:
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (numpy.ndarray, Parameter, NoneType, tuple, tuple, tuple, int)
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (numpy.ndarray, Parameter, NoneType, tuple, tuple, tuple, int)

Thank you

kevkid avatar Jul 28 '22 23:07 kevkid
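The traceback above tells the story: model(img) was called with a NumPy array (as returned by OpenCV), but conv2d expects a float Tensor in NCHW layout. A minimal conversion sketch, using a zero-filled array as a stand-in for a real captured frame:

```python
import numpy as np
import torch

# Hypothetical frame as OpenCV returns it: HxWx3, uint8.
frame = np.zeros((448, 448, 3), dtype=np.uint8)

# conv2d needs a float NCHW Tensor, not a NumPy array:
# HWC uint8 -> CHW -> add batch dim -> float in [0, 1].
img = torch.from_numpy(frame).permute(2, 0, 1).unsqueeze(0).float() / 255.0
print(img.shape)  # torch.Size([1, 3, 448, 448])

# out = model(img)  # dtype and shape now match what conv2d expects
```

Depending on how the model was trained, mean/std normalization may also be needed after the division by 255.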

Hello, I'd like to ask why I can't find the scheduler referenced by the line "from scheduler import warmup designer" in the dist_train file. What's the reason?

731076467 avatar Aug 02 '22 03:08 731076467

Hi, please ignore it and delete the corresponding import.

Best, Yihan


MemorySlices avatar Aug 02 '22 08:08 MemorySlices