
Inference time in Python is too high

Jucjiaswiss opened this issue 5 years ago • 2 comments

Hi, I used models/mobilenetv2_voc/yolo_lite/train_pruned.prototxt for training and models/mobilenetv2_voc/yolo_lite/yolov3_lite_deploy_pruned.prototxt for testing, with examples/yolo/detect.py as the reference test script. The inference time is 500 ms, which is too slow. Is there anything wrong, or what can I do? Training environment: GPU, Ubuntu 16.04.
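For context, a minimal sketch of how the forward pass alone can be timed, separate from detect.py's pre- and post-processing (the .caffemodel filename below is an assumption, not the repo's actual weights file):

```python
import time
import numpy as np
import caffe

deploy = 'models/mobilenetv2_voc/yolo_lite/yolov3_lite_deploy_pruned.prototxt'
weights = 'models/mobilenetv2_voc/yolo_lite/yolov3_lite_pruned.caffemodel'  # assumed filename

net = caffe.Net(deploy, weights, caffe.TEST)

# Feed a dummy input matching the deploy prototxt's input blob shape.
net.blobs['data'].data[...] = np.random.rand(
    *net.blobs['data'].data.shape).astype(np.float32)

net.forward()                      # warm-up pass
start = time.time()
net.forward()                      # timed pass
print('forward: %.1f ms' % ((time.time() - start) * 1000.0))
```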

— Jucjiaswiss, Jan 03 '20

In detect.py, the default inference mode is CPU (caffe.set_mode_cpu() in the main function); you should change it to caffe.set_mode_gpu().
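Something like this near the top of main() (the device id 0 is just an example; pick your GPU):

```python
import caffe

# detect.py defaults to CPU; switch to GPU before constructing the net.
caffe.set_device(0)   # example GPU id
caffe.set_mode_gpu()
# the original line was: caffe.set_mode_cpu()
```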

— AnmachenGuo, Jan 06 '20

@AnmachenGuo Thanks for the help! Yes, I tried that and it gives the right result (only 30 ms). But the CPU time is still high (500 ms); what measures could be taken to improve it?

— Jucjiaswiss, Jan 06 '20