
Evaluation Results with MOT16

Open · Mary-xl opened this issue · 12 comments

Dear @ZQPei, I evaluated the performance on the MOT16 training set and got the results below. I'm just wondering whether they are reasonable:

| Sequence | IDF1 | IDP | IDR | Rcll | Prcn | GT | MT | PT | ML | FP | FN | IDs | FM | MOTA | MOTP | IDt | IDa | IDm |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MOT16-02 | 24.9% | 54.0% | 16.2% | 25.2% | 83.7% | 54 | 5 | 19 | 30 | 873 | 13348 | 70 | 190 | 19.9% | 0.316 | 75 | 13 | 19 |
| MOT16-04 | 40.3% | 65.2% | 29.1% | 37.8% | 84.6% | 83 | 6 | 42 | 35 | 3279 | 29586 | 124 | 600 | 30.6% | 0.292 | 100 | 30 | 16 |
| MOT16-05 | 35.4% | 44.9% | 29.3% | 54.8% | 84.0% | 125 | 27 | 65 | 33 | 712 | 3079 | 64 | 155 | 43.5% | 0.300 | 138 | 5 | 82 |
| MOT16-09 | 40.6% | 51.6% | 33.4% | 58.5% | 90.6% | 25 | 6 | 17 | 2 | 321 | 2180 | 59 | 94 | 51.3% | 0.271 | 59 | 4 | 10 |
| MOT16-10 | 32.0% | 46.2% | 24.5% | 41.9% | 79.0% | 54 | 9 | 21 | 24 | 1376 | 7151 | 132 | 321 | 29.7% | 0.304 | 132 | 15 | 32 |
| MOT16-11 | 36.7% | 46.6% | 30.3% | 57.6% | 88.7% | 69 | 13 | 23 | 33 | 672 | 3888 | 35 | 87 | 49.9% | 0.257 | 55 | 5 | 26 |
| MOT16-13 | 23.5% | 42.5% | 16.3% | 25.5% | 66.5% | 107 | 6 | 40 | 61 | 1472 | 8530 | 131 | 379 | 11.5% | 0.352 | 146 | 22 | 47 |
| OVERALL | 34.8% | 54.8% | 25.5% | 38.6% | 83.0% | 517 | 72 | 227 | 218 | 8705 | 67762 | 615 | 1826 | 30.2% | 0.295 | 705 | 94 | 232 |

The MOTA ranges from 11.5% (MOT16-13) to 51.3% (MOT16-09). I'm using yolov3.weights. The best sequence is reasonably close to the 61.4% MOTA that the DeepSORT authors report, but the MOTP is much worse than their reported 79.1%; here I get 0.352. Is that correct?

I noticed that many people have asked for the original MOT16 videos for testing. I downloaded the videos from the MOT16 website, but the results were quite strange (many entries were either nan or negative). I then found that those videos had been resized to half width and half height, so I converted the image frames from the MOT16 dataset into videos myself and used those videos as input to the program, which produced the results above.
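For anyone who wants to reproduce the frame-to-video step, something along these lines works. This is only a sketch assuming the standard MOT16 layout (an `img1/` folder of frames plus `seqinfo.ini` per sequence); the function name, paths, and codec are placeholders, not code from this repo:

```python
# Sketch: rebuild a video from a MOT16 sequence's image frames at the
# sequence's original frame rate and resolution (read from seqinfo.ini).
import configparser
import glob
import os

import cv2

def frames_to_video(seq_dir, out_path):
    info = configparser.ConfigParser()
    info.read(os.path.join(seq_dir, "seqinfo.ini"))
    fps = int(info["Sequence"]["frameRate"])
    size = (int(info["Sequence"]["imWidth"]), int(info["Sequence"]["imHeight"]))

    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    for frame_path in sorted(glob.glob(os.path.join(seq_dir, "img1", "*.jpg"))):
        writer.write(cv2.imread(frame_path))
    writer.release()

frames_to_video("MOT16/train/MOT16-02", "MOT16-02.mp4")
```

Keeping the sequence's original frame rate and resolution avoids the mismatch caused by the half-size videos from the website.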

Mary-xl avatar Apr 29 '20 04:04 Mary-xl

Hi, can you clarify how you produced the results for testing on MOT16? This repo only contains code for the demo, but as far as I know, MOT16 requires the results in a CSV-style text file. Could you please share your evaluation code? By the way, which MOT devkit did you use, the MATLAB one or py-motmetrics? Thanks!
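For reference, the MOTChallenge result format is one plain-text line per box: `frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z`, with the last three fields set to -1 for 2D tracking. Below is a minimal sketch of writing tracker output in that format; the `results` list of `(frame, track_id, x1, y1, x2, y2)` tuples and the file name are assumptions for illustration, not this repo's API.

```python
# Sketch: dump tracking output in the MOTChallenge text format, one line per box.
def write_mot_results(results, out_path):
    # results: iterable of (frame, track_id, x1, y1, x2, y2) in pixel coordinates.
    with open(out_path, "w") as f:
        for frame, track_id, x1, y1, x2, y2 in results:
            # frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z
            f.write(f"{frame},{track_id},{x1:.2f},{y1:.2f},"
                    f"{x2 - x1:.2f},{y2 - y1:.2f},-1,-1,-1,-1\n")

write_mot_results([(1, 1, 100.0, 200.0, 150.0, 320.0)], "MOT16-02.txt")
```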

pvti avatar May 02 '20 09:05 pvti

> (quoting @Mary-xl's results above)

Hi guys, can you clarify how you got the test results on MOT16? We can communicate through email: [email protected]

Angel0003 avatar Jun 10 '20 01:06 Angel0003

> (quoting @pvti's question above)

Hi guys, do you know how to get the test results on MOT16? I got some errors when I ran the script yolov3_deepsort_eval.py.

Angel0003 avatar Jun 10 '20 10:06 Angel0003

> (quoting @Mary-xl's results above)

I got the same result as you. Have you solved the problem?

LYZIP avatar Jul 03 '20 08:07 LYZIP

| Sequence | IDF1 | IDP | IDR | Rcll | Prcn | GT | MT | PT | ML | FP | FN | IDs | FM | MOTA | MOTP | IDt | IDa | IDm |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MOT16-01 | 0.0% | 0.0% | nan% | nan% | 0.0% | 0 | 0 | 0 | 0 | 3620 | 0 | 0 | 0 | -inf% | nan | 0 | 0 | 0 |
| MOT16-03 | 0.0% | 0.0% | nan% | nan% | 0.0% | 0 | 0 | 0 | 0 | 54495 | 0 | 0 | 0 | -inf% | nan | 0 | 0 | 0 |
| MOT16-06 | 0.0% | 0.0% | nan% | nan% | 0.0% | 0 | 0 | 0 | 0 | 9982 | 0 | 0 | 0 | -inf% | nan | 0 | 0 | 0 |
| MOT16-07 | 0.0% | 0.0% | nan% | nan% | 0.0% | 0 | 0 | 0 | 0 | 7471 | 0 | 0 | 0 | -inf% | nan | 0 | 0 | 0 |
| MOT16-08 | 0.0% | 0.0% | nan% | nan% | 0.0% | 0 | 0 | 0 | 0 | 7087 | 0 | 0 | 0 | -inf% | nan | 0 | 0 | 0 |
| MOT16-12 | 0.0% | 0.0% | nan% | nan% | 0.0% | 0 | 0 | 0 | 0 | 4734 | 0 | 0 | 0 | -inf% | nan | 0 | 0 | 0 |
| MOT16-14 | 0.0% | 0.0% | nan% | nan% | 0.0% | 0 | 0 | 0 | 0 | 7451 | 0 | 0 | 0 | -inf% | nan | 0 | 0 | 0 |
| OVERALL | 0.0% | 0.0% | nan% | nan% | 0.0% | 0 | 0 | 0 | 0 | 94840 | 0 | 0 | 0 | -inf% | nan | 0 | 0 | 0 |

I ran yolo_deepsort_eval.py, but only the FP column has values. Would you please tell me what went wrong? Thanks a lot!

fyture avatar Nov 01 '20 13:11 fyture

> (quoting @fyture's results above)

Hi, I got the same result as you. Have you solved the problem?

lvZic avatar May 23 '21 08:05 lvZic

> (quoting @Mary-xl's results above)

I get similar results. The MOTA and MOTP are much worse than the author's. Have you found out the reason? Thanks!

WGY907 avatar Jun 11 '21 03:06 WGY907

> (quoting @Mary-xl's results and my earlier comment above)

About MOTA and MOTP: the authors used the motmetrics Python library as the evaluation tool. Its documentation describes MOTP like this: "Metric MOTP seems to be off. To convert, compute (1 - MOTP) * 100. MOTChallenge benchmarks compute MOTP as a percentage, while py-motmetrics sticks to the original definition of average distance over the number of assigned objects [1]."

I think that explains it.
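As a concrete illustration of that conversion, here is a minimal py-motmetrics sketch for a single sequence; the file paths are placeholders and only the last lines apply the `(1 - MOTP) * 100` conversion:

```python
# Sketch: evaluate one sequence with py-motmetrics, then convert its MOTP
# (average matching distance, lower is better) to the MOTChallenge-style
# percentage (higher is better). Paths are placeholders.
import motmetrics as mm

gt = mm.io.loadtxt("MOT16/train/MOT16-02/gt/gt.txt", fmt="mot16", min_confidence=1)
ts = mm.io.loadtxt("output/MOT16-02.txt", fmt="mot16")

# Match tracker boxes to ground truth by IoU, 0.5 threshold.
acc = mm.utils.compare_to_groundtruth(gt, ts, "iou", distth=0.5)

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=["mota", "motp"], name="MOT16-02")
print(summary)

motp_challenge = (1 - summary["motp"]["MOT16-02"]) * 100
print(f"MOTP (MOTChallenge style): {motp_challenge:.1f}%")
```

With MOTP values around 0.3 as reported by py-motmetrics above, this conversion gives roughly 70%, which is in the same range as the 79.1% the DeepSORT paper reports.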

WGY907 avatar Jun 14 '21 00:06 WGY907

You can have a look at my blog: https://blog.csdn.net/dbdxwyl/article/details/118308750

wyl2077 avatar Jun 28 '21 10:06 wyl2077

Is there any FPS requirement when converting the pictures to a video? I made videos from the images, but my results are nan or 0.0% too.

doris797 avatar Oct 11 '21 02:10 doris797

> (quoting @fyture's results and @lvZic's reply above)

Hello, did you solve this problem? Could you tell me how you solved it?

da1396 avatar Nov 23 '22 08:11 da1396

Hello. Regarding the question on GitHub about the MOTA and MOTP being much worse than the author's: I found that the authors used the motmetrics Python library as the evaluation tool, and its documentation describes MOTP as follows: "Metric MOTP seems to be off. To convert, compute (1 - MOTP) * 100. MOTChallenge benchmarks compute MOTP as a percentage, while py-motmetrics sticks to the original definition of average distance over the number of assigned objects [1]." As for the nan values, I don't have the same problem, but I am guessing it is your detection results; I recommend debugging to find the cause.
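As a quick way to narrow it down, a small sanity check along these lines can help before running the evaluation; the directory names are placeholders. One thing to keep in mind: MOT16-01/03/06/07/08/12/14 are the test split, which does not ship public ground truth, so evaluating them locally against `gt/gt.txt` will always give GT = 0, nan, and -inf; local evaluation only works on the train split.

```python
# Sketch: check that each sequence has a non-empty gt/gt.txt and a non-empty
# tracker result file before evaluating. Directory names are placeholders.
import os

MOT_ROOT = "MOT16/train"   # only the train split ships ground truth
RESULT_DIR = "output"      # wherever the tracker wrote <sequence>.txt files

for seq in sorted(os.listdir(MOT_ROOT)):
    gt_file = os.path.join(MOT_ROOT, seq, "gt", "gt.txt")
    res_file = os.path.join(RESULT_DIR, seq + ".txt")
    gt_ok = os.path.isfile(gt_file) and os.path.getsize(gt_file) > 0
    res_ok = os.path.isfile(res_file) and os.path.getsize(res_file) > 0
    print(f"{seq}: gt {'ok' if gt_ok else 'MISSING'}, "
          f"result {'ok' if res_ok else 'missing or empty'}")
```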

I hope you find it useful.


WGY907 avatar Nov 23 '22 09:11 WGY907