FairMOT
Evaluating Validation Data on Custom Dataset
I used the code to train on my own dataset. Training finished and went well, and when I tested the trained model on a video file with demo.py, the tracking output was good.
However, when I tried to measure performance (accuracy, IDS, etc.) on my validation set, I got the error below. How can I solve this, or which part of the code should I modify to handle my custom dataset?
(FairMOT) D:\FairMOT\src>python track.py mot --load_model ../models/model_30.pth --conf_thres 0.6 --arch hrnet_32 --reid_dim 128
Fix size testing.
training chunk_sizes: [2]
The output will be saved to D:\FairMOT\src\lib\..\..\exp\mot\default
heads {'hm': 1, 'wh': 2, 'id': 128, 'reg': 2}
2020-06-21 12:43:29 [INFO]: start seq: V000
Traceback (most recent call last):
File "track.py", line 255, in <module>
save_videos=False)
File "track.py", line 102, in main
dataloader = datasets.LoadImages(osp.join(data_root, seq, 'img1'), opt.img_size)
File "D:\FairMOT\src\lib\datasets\dataset\jde.py", line 31, in __init__
self.nF = len(self.files) # number of image files
AttributeError: 'LoadImages' object has no attribute 'files'
You can change the format of your custom dataset to match the MOTChallenge dataset, because the evaluation code only supports the MOT format.
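To illustrate the suggestion above, here is a minimal sketch of writing custom annotations out in MOTChallenge gt.txt layout. The input tuples, sequence path, and field values are hypothetical examples, not FairMOT code:

```python
import os

# Hypothetical input: per-frame boxes as (frame, track_id, x, y, w, h).
# MOTChallenge gt.txt has one line per box:
#   frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility
annotations = [
    (1, 1, 330.85, 410.18, 193.30, 124.71),
    (2, 1, 332.10, 409.50, 192.80, 125.00),
]

seq_dir = "MOT_custom/train/V000"  # example sequence directory
os.makedirs(os.path.join(seq_dir, "gt"), exist_ok=True)

with open(os.path.join(seq_dir, "gt", "gt.txt"), "w") as f:
    for frame, tid, x, y, w, h in annotations:
        # conf=1 marks the entry as active, class=1 is pedestrian,
        # visibility=1.0 means fully visible
        f.write(f"{frame},{tid},{x:.2f},{y:.2f},{w:.2f},{h:.2f},1,1,1.0\n")
```

Each sequence also needs the `img1` folder with frames and a `seqinfo.ini`, following the MOTChallenge directory layout.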
Thank you so much for your reply.
Initially, I put the data in the Caltech dataset format, as I did for training. Let me change it and evaluate my results.
Thank you.
@NaifahNurya did you train on a custom dataset? If you did, please let me know what changes you made and what dataset you used. Humbly looking forward to hearing from you. Regards
@sagarkarki136 What I did was prepare the data based on this link https://github.com/Zhongdao/Towards-Realtime-MOT/blob/master/DATASET_ZOO.md, as suggested by @ifzhang. You should follow all of these steps. I advise you to download the Caltech Pedestrian dataset and see how it was labeled. Then add yourdataset.val and yourdataset.train to this folder: https://github.com/ifzhang/FairMOT/tree/master/src/data. You also have to modify your path in the data.json file located in https://github.com/ifzhang/FairMOT/tree/master/src/lib/cfg
@NaifahNurya Thanks. You only need to put the dataset under src/data:
e.g.:
src/data/MOT16
If you do not use MOT, you may change it here (src/track.py, lines 151-158):
@ifzhang & @gmt710
I did as suggested and changed my dataset format to match MOTChallenge, then ran the following command for evaluation:
python track.py mot --load_model ../exp/mot/all_hrnet/model_30.pth --conf_thres 0.6 --arch hrnet_32 --K 128 --reid_dim 128
It starts to run and produces the following output until it finishes:
Fix size testing.
training chunk_sizes: [2]
The output will be saved to D:\FairMOT\src\lib\..\..\exp\mot\default
heads {'hm': 1, 'wh': 2, 'id': 128, 'reg': 2}
2020-06-29 13:42:19 [INFO]: start seq: V003
Creating model...
loaded ../exp/mot/all_hrnet/model_50.pth, epoch 50
2020-06-29 13:42:36 [INFO]: Processing frame 0 (100000.00 fps)
2020-06-29 13:42:42 [INFO]: Processing frame 20 (3.64 fps)
.
.
.
.
2020-06-29 13:49:39 [INFO]: Processing frame 1740 (4.48 fps)
2020-06-29 13:49:44 [INFO]: Processing frame 1760 (4.48 fps)
2020-06-29 13:49:49 [INFO]: Processing frame 1780 (4.48 fps)
2020-06-29 13:49:54 [INFO]: save results to ./data/MOT/JDE/VAL/test\..\results\MOT15_val_all_dla34\V003.txt
2020-06-29 13:49:54 [INFO]: Evaluate seq: V003
2020-06-29 13:49:55 [INFO]: Time elapsed: 401.60 seconds, FPS: 4.48
In V003.txt there are tracking results, but the evaluation file (Excel file) and the console output give nothing, as shown below:
|         | IDF1 | IDP  | IDR  | Rcll | Prcn | GT | MT | PT | ML | FP    | FN | IDs | FM | MOTA  | MOTP | IDt | IDa | IDm |
|---------|------|------|------|------|------|----|----|----|----|-------|----|-----|----|-------|------|-----|-----|-----|
| V003    | 0.0% | 0.0% | nan% | nan% | 0.0% | 0  | 0  | 0  | 0  | 21788 | 0  | 0   | 0  | -inf% | nan  | 0   | 0   | 0   |
| OVERALL | 0.0% | 0.0% | nan% | nan% | 0.0% | 0  | 0  | 0  | 0  | 21788 | 0  | 0   | 0  | -inf% | nan  | 0   | 0   | 0   |
What could be the reason for this?
This is my data. In the det folder there is the ground truth, and this is its format:
1,-1,330.85,410.18,193.3,124.71,1,-1,-1,-1
Any suggestions on this?
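The all-zero GT counts in the summary explain the nan/-inf values arithmetically: when the evaluator finds no ground-truth boxes, every GT-normalised metric degenerates. A small sketch of that arithmetic, assuming the standard CLEAR-MOT definitions (recall = TP/GT, MOTA = 1 - (FP+FN+IDSW)/GT):

```python
# Counts taken from the summary above: 21788 false positives, zero GT boxes.
fp, fn, idsw, num_gt = 21788, 0, 0, 0

# With num_gt == 0 the divisions are undefined: recall is 0/0 (nan),
# and MOTA subtracts a positive quantity divided by zero (-inf).
recall = float("nan") if num_gt == 0 else (num_gt - fn) / num_gt
mota = float("-inf") if num_gt == 0 else 1.0 - (fp + fn + idsw) / num_gt

print(recall, mota)  # nan -inf
```

So the symptom points to the evaluator loading an empty (or missing) gt.txt while the tracker still produced 21788 boxes, all counted as false positives.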
@NaifahNurya Hello, add this
--val_mot16 True
@gmt710 I am not using MOT16, I use my own data. Would that still be valid?
@NaifahNurya I guess “gt” is needed.
I use the test set for validation, @gmt710. So I have a det folder, and inside it there is det.txt.
Hmmm, can you try it? I ran into your problem on MOT16 because I used the test set, which does not include gt.
Okay, I will try it and give you feedback.
But in gt.txt, what does the last value represent?
E.g. the bolded values below (from MOT16 > train > MOT16-02 > gt):
594,74,1043,445,32,97,1,1,0
463,70,1109,449,35,100,1,1,0.018152
332,66,1006,453,18,61,**0,7,0**
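For reference, in the MOTChallenge ground-truth format the nine fields are frame, track id, bb_left, bb_top, bb_width, bb_height, an active/consider flag, a class id, and a visibility ratio in [0, 1] — so the last value is visibility. A small parsing sketch over the lines quoted above:

```python
# Parse one MOTChallenge gt.txt line into named fields.
def parse_gt_line(line):
    frame, tid, x, y, w, h, active, cls, vis = line.split(",")
    return {
        "frame": int(frame),
        "id": int(tid),
        "bbox": (float(x), float(y), float(w), float(h)),
        "active": int(active),   # 0 means the entry is ignored in evaluation
        "class": int(cls),       # 1 = pedestrian in MOT16/17
        "visibility": float(vis),
    }

print(parse_gt_line("463,70,1109,449,35,100,1,1,0.018152")["visibility"])  # 0.018152
```

Note the third quoted line has active = 0 and class = 7, i.e. a non-pedestrian entry that the evaluator should skip.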
@NaifahNurya https://blog.csdn.net/yuanlulu/article/details/79818599
@ifzhang, for that reason, does it mean we cannot evaluate MOT16-test because there is no gt.txt?
We can only evaluate MOT16-train, since it has gt.txt?
Yes, we can only evaluate on MOT16-train. If we want to evaluate MOT16-test, we need to submit the results to the MOT Challenge website.
I am wondering why the MOT online evaluator requires submitting txt files for all sequences, including the training set. Do we just need to submit the GT files for the training set?
Yes, we can only evaluate on MOT16-train. If we want to evaluate MOT16-test, we need to submit the results to the MOT Challenge website.
Exactly. I have tried evaluating MOT16-test locally and just got nothing, as @NaifahNurya presented above.
@NaifahNurya Excuse me, I would like to ask how to get the label text corresponding to each picture in the Caltech dataset. I only got images and annotations.json from the reference code.
@ifzhang I am trying to run tracking on MOT15/16/17/20. For that I need to submit the results .txt files to the evaluation server. Can you tell me how many times we can submit files to the server? Secondly, for the MOT15 dataset, seqinfo.ini files are provided only for the train set. Can you please provide the same for the test set too, or let me know how to get them?
@NaifahNurya Hello, add this
--val_mot16 True
I used MOT15 and added --val_mot15; however, it still displayed nan%.
Check that the directory structure matches the one given in the official repo, generate labels for the ground-truth annotations in case you didn't, and then run it again.
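Generating labels from the ground truth can be sketched as below, assuming the usual FairMOT/JDE label format used by the gen_labels_* scripts: one txt per image under labels_with_ids, each line `class identity x_center y_center width height` with the box normalised by the image size. The image size, sequence name, and paths here are example values:

```python
import os

# Example inputs: image resolution, sequence name, and raw gt.txt lines.
IMG_W, IMG_H = 1920, 1080
seq = "V003"
gt_lines = ["1,1,330.85,410.18,193.3,124.71,1,1,1.0"]

out_dir = os.path.join("labels_with_ids", seq, "img1")
os.makedirs(out_dir, exist_ok=True)

for line in gt_lines:
    frame, tid, x, y, w, h = [float(v) for v in line.split(",")[:6]]
    # convert top-left corner to box centre, then normalise to [0, 1]
    cx, cy = (x + w / 2) / IMG_W, (y + h / 2) / IMG_H
    label = f"0 {int(tid)} {cx:.6f} {cy:.6f} {w / IMG_W:.6f} {h / IMG_H:.6f}\n"
    # one label file per frame, appending because a frame can hold many boxes
    with open(os.path.join(out_dir, f"{int(frame):06d}.txt"), "a") as f:
        f.write(label)
```

The class is always 0 (single-class pedestrian tracking) and the identity must stay consistent across frames of a sequence.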
Yes, we can only evaluate on MOT16-train. If we want to evaluate the MOT16-test, we need to submit the results to the MOT Challenge website.
For running evaluation on the MOT train set, do we need to generate the labels_with_ids using gen_labels_20.py, or does the gt.txt file work? I have prepared custom training data in MOT20 format and am thinking of using the MOT20 train set for evaluation.
Thanks for the help!!