Detectron.pytorch
Evaluation scripts for Custom Dataset & VOC
Hey!
I have fine-tuned a network on VOC 2007 that was initially trained on the COCO dataset. I am now trying to evaluate it on the validation set. When I run the command

```
python tools/test_net.py --dataset voc2007 --cfg configs/baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml --load_ckpt {path/to/your/checkpoint}
```
I get this:

```
INFO task_evaluation.py: 61: Evaluating bounding boxes is done!
INFO task_evaluation.py: 104: Evaluating segmentations
Traceback (most recent call last):
  File "tools/test_net.py", line 125, in <module>
  File "/home/deep/data/asif/Detectron/Detectron.pytorch/lib/core/test_engine.py", line 128, in run_inference
    all_results = result_getter()
  File "/home/deep/data/asif/Detectron/Detectron.pytorch/lib/core/test_engine.py", line 108, in result_getter
    multi_gpu=multi_gpu_testing
  File "/home/deep/data/asif/Detectron/Detectron.pytorch/lib/core/test_engine.py", line 163, in test_net_on_dataset
    dataset, all_boxes, all_segms, all_keyps, output_dir
  File "/home/deep/data/asif/Detectron/Detectron.pytorch/lib/datasets/task_evaluation.py", line 63, in evaluate_all
    results = evaluate_masks(dataset, all_boxes, all_segms, output_dir)
  File "/home/deep/data/asif/Detectron/Detectron.pytorch/lib/datasets/task_evaluation.py", line 128, in evaluate_masks
    'No evaluator for dataset: {}'.format(dataset.name)
NotImplementedError: No evaluator for dataset: voc_2007_test
```
This comes from `task_evaluation.py`, which has no mask-evaluation implementation for VOC. Is there a workaround?
Since the VOC annotations are already in JSON format, I assumed it should still be possible to evaluate segmentations in the task_evaluation script. How can I do this?
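For context, the error in the traceback comes from a dataset-name dispatch in `evaluate_masks`. A minimal sketch of that pattern (simplified and paraphrased, not the exact Detectron.pytorch source; the `force_json_eval` parameter stands in for the `TEST.FORCE_JSON_DATASET_EVAL` config flag):

```python
def evaluate_masks(dataset_name, force_json_eval=False):
    """Dispatch mask evaluation by dataset name.

    Sketch of the pattern in lib/datasets/task_evaluation.py: COCO-style
    datasets get the JSON evaluator; anything unrecognized raises, which
    is exactly what happens for 'voc_2007_test'.
    """
    if force_json_eval or dataset_name.startswith('coco_'):
        return 'coco-style json evaluation'
    raise NotImplementedError(
        'No evaluator for dataset: {}'.format(dataset_name))


# A COCO dataset dispatches fine, while a VOC name falls through
# to the NotImplementedError seen in the traceback above.
print(evaluate_masks('coco_2014_minival'))
```

This is why forcing JSON-style evaluation (as tried below) is a plausible route: it bypasses the name-based dispatch entirely.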
I also tried to use the VOC 2007 test data as a custom dataset. I ran this command:

```
python tools/test_net.py --dataset custom_dataset --cfg configs/baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml --load_ckpt=/home/deep/data/asif/Detectron/Detectron.pytorch/Outputs/e2e_mask_rcnn_R-50-FPN_1x/Mar18-15-32-49_deeppc_step/ckpt/model_step162693.pth --num_classes=21
```

I also made the necessary changes to the dataset directory entries. Even though this is a VOC dataset, the annotations are in JSON format, so I added `TEST.FORCE_JSON_DATASET_EVAL: True` to the config file.
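For reference, in the YAML config this option belongs under the `TEST` section; a sketch of the assumed placement (the key name is taken from the thread, the surrounding structure from the baseline configs):

```yaml
# Assumed placement in configs/baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml
TEST:
  FORCE_JSON_DATASET_EVAL: True  # force COCO-style JSON evaluation for non-COCO datasets
```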
This is the result I got:

```
INFO task_evaluation.py: 75: Evaluating detections
INFO json_dataset_evaluator.py: 162: Writing bbox results json to: /home/deep/data/asif/Detectron/Detectron.pytorch/Outputs/e2e_mask_rcnn_R-50-FPN_1x/Mar18-15-32-49_deeppc_step/test/bbox_custom_data_test_results.json
INFO task_evaluation.py: 61: Evaluating bounding boxes is done!
INFO task_evaluation.py: 104: Evaluating segmentations
INFO json_dataset_evaluator.py: 83: Writing segmentation results json to: /home/deep/data/asif/Detectron/Detectron.pytorch/Outputs/e2e_mask_rcnn_R-50-FPN_1x/Mar18-15-32-49_deeppc_step/test/segmentations_custom_data_test_results.json
INFO task_evaluation.py: 65: Evaluating segmentations is done!
INFO task_evaluation.py: 180: copypaste: Dataset: custom_data_test
INFO task_evaluation.py: 182: copypaste: Task: box
INFO task_evaluation.py: 185: copypaste: AP,AP50,AP75,APs,APm,APl
INFO task_evaluation.py: 186: copypaste: -1.0000,-1.0000,-1.0000,-1.0000,-1.0000,-1.0000
INFO task_evaluation.py: 182: copypaste: Task: mask
INFO task_evaluation.py: 185: copypaste: AP,AP50,AP75,APs,APm,APl
INFO task_evaluation.py: 186: copypaste: -1.0000,-1.0000,-1.0000,-1.0000,-1.0000,-1.0000
```
I see that it has just printed empty (-1.0000) box and mask results from `task_evaluation.py`. Is it not possible to evaluate predictions for datasets other than COCO? My ultimate goal is to use a custom dataset with a different number of classes, so this would be of great help to me!
I have not implemented mask evaluation for the VOC dataset. I checked, and it is not in the official Detectron repo either.
However, it should indeed be possible to compute these for the VOC dataset, given that the annotations have the segmentation tag. I will work on fixing this over the weekend.
Hey,
I found a workaround, but I am not sure it is correct! The VOC 2007 annotations are indeed in JSON format (with segmentation and bbox tags included), so I treated them as a custom dataset and added `TEST.FORCE_JSON_DATASET_EVAL: True`
to the config file.
This simply gave -1.0000 for all box and mask APs.
I found this guard in the `json_dataset_evaluator.py` file:

```python
if json_dataset.name.find('test') == -1:
    coco_eval = _do_detection_eval(json_dataset, res_file, output_dir)
else:
    coco_eval = None
```

It skips box and mask evaluation whenever the dataset name contains "test", so I renamed my test image and annotation files to "val".
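The effect of that guard can be checked in isolation. A small sketch (the function name `runs_coco_eval` is mine; the condition is copied from the snippet above):

```python
def runs_coco_eval(dataset_name):
    # Same condition as the guard in json_dataset_evaluator.py:
    # str.find returns -1 when the substring is absent, so the
    # COCO-style evaluation only runs when the dataset name does
    # NOT contain the substring "test".
    return dataset_name.find('test') == -1

print(runs_coco_eval('custom_data_test'))  # False -> eval skipped, APs reported as -1
print(runs_coco_eval('custom_data_val'))   # True  -> COCO-style eval runs
```

This explains the -1.0000 results above: with "test" in the dataset name the evaluator is never invoked, and renaming to "val" makes the guard pass.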
Then I ran `test_net.py` and got this output:

```
INFO json_dataset_evaluator.py: 122: Wrote json eval results to: /home/deep/data/asif/Detectron/Detectron.pytorch/Outputs/e2e_mask_rcnn_R-50-FPN_1x/Mar18-15-32-49_deeppc_step/test/segmentation_results.pkl
INFO task_evaluation.py: 65: Evaluating segmentations is done!
INFO task_evaluation.py: 180: copypaste: Dataset: custom_data_val
INFO task_evaluation.py: 182: copypaste: Task: box
INFO task_evaluation.py: 185: copypaste: AP,AP50,AP75,APs,APm,APl
INFO task_evaluation.py: 186: copypaste: 0.1957,0.3516,0.1975,0.0316,0.1114,0.2491
INFO task_evaluation.py: 182: copypaste: Task: mask
INFO task_evaluation.py: 185: copypaste: AP,AP50,AP75,APs,APm,APl
INFO task_evaluation.py: 186: copypaste: 0.2001,0.3539,0.2019,0.0299,0.1098,0.2575
```
I hope my intuition here is right! If you think I have made a mistake, please let me know!
The APs are quite low, which makes me question the method. But it might also be because I trained the network on COCO and only fine-tuned it on VOC 2007.