ByteTrack
Error when running demo_track.py - does not work on CPU
I have created a VM with Linux Ubuntu and installed all dependencies. I suspect the following error occurs because the machine has no GPU; the code does not seem to work with device='cpu'. Do you know how to fix the code so that the pretrained models, including the tracking part, run on CPU only?
Thanks for your support!
(MyEnv)root@mv:/home/mv/ByteTrack# python3 tools/demo_track.py video -f exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar --fp16 --fuse --save_result --device=cpu
Matplotlib is building the font cache; this may take a moment.
2021-11-04 | INFO | __main__:main:298 - Args: Namespace(camid=0, ckpt='pretrained/bytetrack_x_mot17.pth.tar', conf=None, demo='video', device='cpu', exp_file='exps/example/mot/yolox_x_mix_det.py', experiment_name='yolox_x_mix_det', fp16=True, fuse=True, match_thresh=0.8, min_box_area=10, mot20=False, name=None, nms=None, path='./videos/palace.mp4', save_result=True, track_buffer=30, track_thresh=0.5, trt=False, tsize=None)
[W NNPACK.cpp:79] Could not initialize NNPACK! Reason: Unsupported hardware.
/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448265233/work/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
2021-11-04 | INFO | __main__:main:308 - Model Summary: Params: 99.00M, Gflops: 791.73
2021-11-04 | INFO | __main__:main:319 - loading checkpoint
2021-11-04 | INFO | __main__:main:323 - loaded checkpoint done.
2021-11-04 | INFO | __main__:main:326 - Fusing model...
/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/module.py:561: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more information.
if param.grad is not None:
2021-11-04 | INFO | __main__:imageflow_demo:240 - video save_path is ./YOLOX_outputs/yolox_x_mix_det/track_vis/2021_11_04/palace.mp4
2021-11-04 | INFO | __main__:imageflow_demo:250 - Processing frame 0 (100000.00 fps)
Traceback (most recent call last):
File "tools/demo_track.py", line 357, in <module>
main(exp, args)
File "tools/demo_track.py", line 350, in main
imageflow_demo(predictor, vis_folder, current_time, args)
File "tools/demo_track.py", line 253, in imageflow_demo
outputs, img_info = predictor.inference(frame, timer)
File "tools/demo_track.py", line 166, in inference
outputs = self.model(img)
File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/mv/ByteTrack/yolox/models/yolox.py", line 30, in forward
fpn_outs = self.backbone(x)
File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/mv/ByteTrack/yolox/models/yolo_pafpn.py", line 93, in forward
out_features = self.backbone(input)
File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/mv/ByteTrack/yolox/models/darknet.py", line 169, in forward
x = self.stem(x)
File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/mv/ByteTrack/yolox/models/network_blocks.py", line 210, in forward
return self.conv(x)
File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/mv/ByteTrack/yolox/models/network_blocks.py", line 54, in fuseforward
return self.act(self.conv(x))
File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 443, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 440, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.HalfTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
You can try running the command without the '--fp16' flag. With '--fp16' the model weights are converted to half precision, but on CPU the input tensor stays float32, which is exactly the dtype mismatch reported in the traceback above.
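As a minimal sketch of this fix (the variable name 'use_fp16' is illustrative, not ByteTrack's actual code), half precision can be gated on the device so the same script runs on both CPU and GPU:

```python
import torch

# Only switch to half precision when a CUDA device is available; many
# fp16 kernels (e.g. some convolutions) are not implemented on CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Conv2d(3, 8, kernel_size=3).to(device)
x = torch.randn(1, 3, 32, 32, device=device)

use_fp16 = device.type == "cuda"  # mirrors dropping --fp16 on CPU
if use_fp16:
    model = model.half()
    x = x.half()

with torch.no_grad():
    y = model(x)

# On CPU this stays float32 and runs. Calling model.half() without also
# halving the input is what reproduces the Float/Half mismatch in the log.
print(y.dtype)
```

Note the error message pairing: converting only the model (but not the input) to half precision produces "Input type (torch.FloatTensor) and weight type (torch.HalfTensor) should be the same", matching the traceback above.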
@ifzhang When I run demo_track.py, I get the error "cannot import name 'plot_tracking' from 'yolox.utils.visualize' (/mnt/d/PycharmProjects/yolox_deepsort_devs/yolox/utils/visualize.py)".
@ImSuMyatNoe did you find the reason?
You can try running the command without the '--fp16' flag.
It works well with 'fp16=False'. Thanks for your kind guidance. I also recommend that others take a look at this: https://programmerah.com/runtimeerror-unfolded2d_copy-not-implemented-for-half-19943/
@ifzhang When I run demo_track.py, I get the error "cannot import name 'plot_tracking' from 'yolox.utils.visualize' (/mnt/d/PycharmProjects/yolox_deepsort_devs/yolox/utils/visualize.py)".
In visualize.py, comment out or delete line 8: __all__ = ["vis"]
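For context on what __all__ actually does (a hedged sketch; the module built here is a made-up stand-in, not the real visualize.py): __all__ only restricts star-imports, so a direct named import of plot_tracking fails only when the function itself is missing from the file.

```python
import sys
import types

# Build a tiny in-memory module that mimics a visualize.py with
# __all__ = ["vis"] but which also defines plot_tracking.
mod = types.ModuleType("visualize_demo")
exec(
    '__all__ = ["vis"]\n'
    'def vis():\n    return "vis"\n'
    'def plot_tracking():\n    return "plot_tracking"\n',
    mod.__dict__,
)
sys.modules["visualize_demo"] = mod

# A direct named import works even though plot_tracking is not in __all__:
from visualize_demo import plot_tracking
print(plot_tracking())

# Only `from module import *` respects __all__:
ns = {}
exec("from visualize_demo import *", ns)
print("vis" in ns, "plot_tracking" in ns)
```

So if the direct import fails, the more likely cause is that the local copy of visualize.py simply does not define plot_tracking, and editing __all__ alone may not fix it.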
It doesn't seem to work.
Maybe use 'python' to run demo_track.py instead of 'python3'.