CenterNet
Actual COCO Validation FPS does not match CenterNet Paper
I compared YOLO and CenterNet on FPS and mAP on my computer. FPS may vary across machine environments, but at the very least CenterNet's FPS should be higher than YOLO's. In practice, it is not.
I used a command like the one in CenterNet/experiments/ddd_3dop.sh:

```
python test.py ctdet --not_prefetch_test --exp_id coco_dla --keep_res --load_model ..\models\ctdet_coco_dla_2x.pth
```
I checked FPS by adding some code to CenterNet/src/test.py:

```python
f = open("fps_test.txt", 'w')  # fps_write
for ind in range(num_iters):
    img_id = dataset.images[ind]
    img_info = dataset.coco.loadImgs(ids=[img_id])[0]
    img_path = os.path.join(dataset.img_dir, img_info['file_name'])
    print(img_path)

    # print fps
    start_time = time.time()
    if opt.task == 'ddd':
        ret = detector.run(img_path, img_info['calib'])
    else:
        ret = detector.run(img_path)
    results[img_id] = ret['results']
    Bar.suffix = '[{0}/{1}]|Tot: {total:} |ETA: {eta:} '.format(
        ind, num_iters, total=bar.elapsed_td, eta=bar.eta_td)
    for t in avg_time_stats:
        avg_time_stats[t].update(ret[t])
        Bar.suffix = Bar.suffix + '|{} {:.3f} '.format(t, avg_time_stats[t].avg)
    bar.next()
    f.write(str(1.0 / (time.time() - start_time)) + '\n')  # fps_write
```
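As an aside, `detector.run()` appears to return per-stage timings in the `ret` dict (keys such as `'net'` and `'dec'`; treat the exact key names as an assumption). A "paper-style" FPS that counts only forward plus decode time could be derived like this, as a minimal sketch:

```python
# Minimal sketch: compute "paper-style" FPS from per-stage timings.
# Assumes ret holds float seconds under 'net' (network forward) and
# 'dec' (decode) -- these key names are an assumption.
def paper_style_fps(ret):
    # Count only forward + decode time; exclude loading/pre-processing.
    return 1.0 / (ret['net'] + ret['dec'])

# Example with hypothetical timings: 20 ms forward + 5 ms decode
fps = paper_style_fps({'net': 0.020, 'dec': 0.005})  # -> 40.0 FPS
```

Comparing this number against the end-to-end FPS written to fps_test.txt would show how much of the gap comes from data loading and pre-processing.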
Could you explain why CenterNet's FPS is lower than YOLO's (mAP aside)?
And if there is something I missed, could you please mention it!
Interesting question. I am replicating the paper now and, from my first tests, am also curious about this. Can we have an official comment?
We appreciate your challenge. Here are a few reasons:
- We report the FPS based on network forward time plus decode time. In your command you added `--not_prefetch_test`, which introduces significant data-loading and pre-processing time. As far as we know, YOLO also reports only network forward time, without NMS time. If I remember correctly, YOLO also used batched testing.
- We do observe a slowdown on our Windows machine. There may be some platform differences in PyTorch; the C++ YOLO can be more robust to that.
- We are comparing against the official YOLO in our paper, while your numbers are non-official (AP 30 vs. the official 33). If you would like to compare at a lower AP, you can try our method at a lower resolution (without `--keep_res`) or with a smaller model (res18_dcn). Hope this resolves your doubt. However, if you really prefer the YOLO-biased setting, feel free to keep using YOLO.
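To make the two measurement protocols comparable, the forward-only time can be isolated from data loading. A generic sketch (not CenterNet's actual code; the helper name is made up, and on GPU you would pass something like `torch.cuda.synchronize` as the `sync` callback so queued kernels finish before the clock is read):

```python
import time

def forward_only_fps(run_forward, iters=100, warmup=10, sync=None):
    # Hypothetical helper: time only the forward calls, after a warm-up,
    # excluding any data loading or pre-processing done elsewhere.
    for _ in range(warmup):
        run_forward()
    if sync is not None:
        sync()               # e.g. torch.cuda.synchronize on GPU
    start = time.time()
    for _ in range(iters):
        run_forward()
    if sync is not None:
        sync()               # wait for all queued work to finish
    return iters / (time.time() - start)
```

Without the synchronization step, GPU timings can look artificially fast because kernel launches return before the work is done.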
Hi,
I got to test it too, on Linux and on Windows, each with a different GPU, so I can give you my take on this. In the paper, YOLO does not use batched testing and NMS is not included. Pre-processing is not included either, but that is because there is none. CenterNet does the same, but by not counting its HUGE pre-processing time (which sometimes takes about as long as the forward pass itself), the comparison is also biased. This is why CenterNet is sadly never faster than YOLO in a "common" out-of-the-box setup. That said, if you do the pre-processing beforehand, it is a bit faster than YOLO, just as the paper says, so they are not lying. Without pre-processing, running natively, I got it up to 52 FPS on a V100, which is a couple of FPS over YOLO608 in a single fp32 pass (though the highest I got for YOLO608 is 102 FPS with batching and int8...). I guess it will get better once these two features arrive (maybe we should file a feature request). Papers generally report non-optimized numbers measured in a specific scenario, so be careful!
I admit this pre-processing is annoying, but it is just like NMS: you need to be able to tolerate a bit of lag and run it faster than your network if you want the best throughput. If you are after the lowest possible latency, though, it won't work out. I would still use YOLO for the moment, since it is easier to use and better supported, but give CenterNet a while and it will be better than YOLO in most scenarios!
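The "do the pre-processing beforehand" idea above can be sketched as a simple background-thread prefetcher (a generic illustration with a made-up helper name, not CenterNet's actual prefetch implementation):

```python
import queue
import threading

def prefetched(preprocess, items, maxsize=8):
    # Run pre-processing in a background thread so the consumer
    # (e.g. the GPU forward pass) only ever waits on q.get(),
    # overlapping CPU pre-processing with inference.
    q = queue.Queue(maxsize=maxsize)

    def worker():
        for item in items:
            q.put(preprocess(item))
        q.put(None)  # sentinel: no more items

    threading.Thread(target=worker, daemon=True).start()
    while True:
        out = q.get()
        if out is None:
            break
        yield out

# Usage sketch (names hypothetical):
#   for inp in prefetched(load_and_resize, image_paths):
#       detector_forward(inp)
```

The bounded queue keeps memory in check while still hiding most of the pre-processing latency behind the forward pass.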
Thank you for your answer.
As you suggested, I tested the ctdet network after deleting the `--not_prefetch_test` argument,
and with a lower resolution and a smaller network, on Ubuntu.
Actually, the lower resolution did not make any difference, but the other changes did yield a higher FPS than before.
Could you please answer another question: why doesn't data prefetching work on a Windows machine?
Hello, when I add `--not_prefetch_test` I can run successfully. However, when I delete it, there is an error:

```
Traceback (most recent call last):
  File "test.py", line 129, in <module>
    prefetch_test(opt)
  File "test.py", line 80, in prefetch_test
    dataset.run_eval(results, opt.save_dir)
  File "/CenterNet/src/lib/datasets/dataset/sfod.py", line 115, in run_eval
    coco_dets = self.coco.loadRes('{}/results.json'.format(save_dir))
  File "/home/amie/.local/lib/python3.5/site-packages/pycocotools-2.0-py3.5-linux-x86_64.egg/pycocotools/coco.py", line 317, in loadRes
    'Results do not correspond to current coco set'
AssertionError: Results do not correspond to current coco set
```

Have you ever been in this situation? Thanks!
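For what it's worth, that assertion comes from pycocotools' `loadRes`, which checks that every `image_id` in the results file exists in the currently loaded annotation set. A small diagnostic sketch (it assumes the standard COCO results JSON format; for a custom dataset like the `sfod.py` here, treat that as an assumption):

```python
import json

def missing_result_ids(coco, results_path):
    # List image_ids that appear in the detections but not in the
    # annotation file loaded by the COCO object; a non-empty result
    # would explain the "Results do not correspond" assertion.
    with open(results_path) as f:
        dets = json.load(f)
    result_ids = {d['image_id'] for d in dets}
    return sorted(result_ids - set(coco.getImgIds()))
```

If this returns IDs, the results were generated against a different image list than the annotation file being evaluated.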
Hi @xingyizhou, so the FPS metrics claimed in the original CenterNet paper are obtained without batched testing, right?
I have the same error when I delete `--not_prefetch_test` on a Windows machine. Have you solved this problem yet?
Were you able to resolve the issue? I am facing the same one. Any help please? Thanks!