
How to run YOLO detection with the provided model?

Open · estimate123 opened this issue 4 years ago • 1 comment

Hi, I want to know how I can run YOLO detection with the model you provided. I don't understand the given example in your bash file track1.sh:

./darknet procimgflr cfg/aicity.data cfg/yolo-voc.cfg yolo-voc_final.weights /home/ipl_gpu/Thomas/aicity18/Track1/Loc1_1/img1/ /home/ipl_gpu/Thomas/aicity18/Track1/Loc1_1/detimg1/ /home/ipl_gpu/Thomas/aicity18/Track1/Loc1_1/det.txt .1 .5 0 1799

What are the images in '/home/ipl_gpu/Thomas/aicity18/Track1/Loc1_1/detimg1/', and what are they used for? Is /Loc1_1/det.txt the file that saves the result (in the MOTChallenge format)? And what is the 'procimgflr' option? When I try to run this command on my computer, it gives me an error (Not an option: procimgflr). Sorry, I'm really new to this. Right now I just want to use your given model to successfully run detection and get the MOT-format output. Your help would mean a lot to me, thanks :)


estimate123 · Aug 06 '20 16:08

Here is the definition of all the parameters:

./darknet procimgflr <data file> <config file> <model weights> <input image folder> <output image folder> <output text file> <confidence threshold> <hierarchy threshold> <starting frame count> <ending frame count>
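As a sanity check, the example command from track1.sh lines up with these parameters as follows (the paths are specific to the author's machine and should be replaced with your own; the notes in parentheses are a best-guess reading of the command, not an official description):

  cfg/aicity.data                 -> data file (dataset/class configuration)
  cfg/yolo-voc.cfg                -> config file (network definition)
  yolo-voc_final.weights          -> model weights
  .../Track1/Loc1_1/img1/         -> input image folder (the frames to run detection on)
  .../Track1/Loc1_1/detimg1/      -> output image folder (presumably the frames with detection boxes drawn on them)
  .../Track1/Loc1_1/det.txt       -> output text file (the detection results)
  .1                              -> confidence threshold
  .5                              -> hierarchy threshold
  0 and 1799                      -> starting and ending frame count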

Here is the corresponding function that gets called for procimgflr (process image folder): https://github.com/zhengthomastang/2018AICity_TeamUW/blob/master/Track1/3_YOLO_VEH/examples/detector.c#L699-L764
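A note on the det.txt question above: the file is presumably written in the standard MOTChallenge detection format (please verify against the linked detector.c), with one detection per line:

  <frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>

where <id> and the last three fields are set to -1 for raw detections, e.g. (values purely illustrative):

  1, -1, 794.3, 247.6, 71.2, 174.9, 0.82, -1, -1, -1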

zhengthomastang · Aug 07 '20 07:08