Vanessa Sidrim
I converted my dataset to COCO format and managed to run SP-NAS, but the training and validation results (mAP and AP) are zeroed. Is it necessary...
Attached the logs:

- [fine_tune_worker_0.log](https://github.com/huawei-noah/vega/files/8844306/fine_tune_worker_0.log)
- [parallel_worker_1.log](https://github.com/huawei-noah/vega/files/8844307/parallel_worker_1.log)
- [parallel_worker_2.log](https://github.com/huawei-noah/vega/files/8844308/parallel_worker_2.log)
- [parallel_worker_3.log](https://github.com/huawei-noah/vega/files/8844309/parallel_worker_3.log)
- [parallel_worker_4.log](https://github.com/huawei-noah/vega/files/8844310/parallel_worker_4.log)
- [parallel_worker_5.log](https://github.com/huawei-noah/vega/files/8844311/parallel_worker_5.log)
- [parallel_worker_6.log](https://github.com/huawei-noah/vega/files/8844312/parallel_worker_6.log)
- [parallel_worker_7.log](https://github.com/huawei-noah/vega/files/8844313/parallel_worker_7.log)
- [parallel_worker_8.log](https://github.com/huawei-noah/vega/files/8844314/parallel_worker_8.log)
- [parallel_worker_9.log](https://github.com/huawei-noah/vega/files/8844315/parallel_worker_9.log)
- [parallel_worker_10.log](https://github.com/huawei-noah/vega/files/8844316/parallel_worker_10.log)
- [parallel_worker_11.log](https://github.com/huawei-noah/vega/files/8844317/parallel_worker_11.log)
- [parallel_worker_12.log](https://github.com/huawei-noah/vega/files/8844318/parallel_worker_12.log)
- [parallel_worker_13.log](https://github.com/huawei-noah/vega/files/8844319/parallel_worker_13.log)
- [parallel_worker_14.log](https://github.com/huawei-noah/vega/files/8844320/parallel_worker_14.log)
- [parallel_worker_15.log](https://github.com/huawei-noah/vega/files/8844321/parallel_worker_15.log)
- [parallel_worker_16.log](https://github.com/huawei-noah/vega/files/8844322/parallel_worker_16.log)
- [parallel_worker_17.log](https://github.com/huawei-noah/vega/files/8844323/parallel_worker_17.log)
- [parallel_worker_18.log](https://github.com/huawei-noah/vega/files/8844324/parallel_worker_18.log)
- [parallel_worker_19.log](https://github.com/huawei-noah/vega/files/8844325/parallel_worker_19.log)
- [parallel_worker_20.log](https://github.com/huawei-noah/vega/files/8844326/parallel_worker_20.log)
- [parallel_worker_21.log](https://github.com/huawei-noah/vega/files/8844327/parallel_worker_21.log)
- [pipeline.log](https://github.com/huawei-noah/vega/files/8844328/pipeline.log)
- [reignition_worker_8.log](https://github.com/huawei-noah/vega/files/8844329/reignition_worker_8.log)
- [reignition_worker_15.log](https://github.com/huawei-noah/vega/files/8844330/reignition_worker_15.log)
- [serial_worker_1.log](https://github.com/huawei-noah/vega/files/8844331/serial_worker_1.log)
- [serial_worker_2.log](https://github.com/huawei-noah/vega/files/8844332/serial_worker_2.log)

...
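As a first check (a minimal sketch, assuming `pycocotools` is installed; the annotation path is hypothetical), the converted COCO file can be inspected directly. Zeroed detection metrics often trace back to empty or malformed annotations rather than to the search itself:

```python
# Minimal sanity check of a converted COCO annotation file.
# The path below is an assumption -- adjust it to your dataset layout.
from pycocotools.coco import COCO

ann_file = "annotations/instances_train.json"  # hypothetical path
coco = COCO(ann_file)

print("images:     ", len(coco.getImgIds()))
print("annotations:", len(coco.getAnnIds()))
print("categories: ", [(c["id"], c["name"]) for c in coco.loadCats(coco.getCatIds())])

# Detection evaluation expects bbox = [x, y, width, height] with positive width/height.
bad = [a for a in coco.loadAnns(coco.getAnnIds())
       if a["bbox"][2] <= 0 or a["bbox"][3] <= 0]
print("annotations with non-positive width/height:", len(bad))
```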
I ran with this configuration and got the following error: `Unexpected key(s) in state_dict for conversion: roi_heads.box_predictor.cls_score.weight torch.Size([91, 1024]) --> roi_heads.box_predictor.cls_score .weight torch.Size([1, 1024])`
The same error occurred after the changes: `Unexpected key(s) in state_dict for conversion: roi_heads.box_predictor.cls_score.weight torch.Size([91, 1024]) --> roi_heads.box_predictor.cls_score .weight torch.Size([1, 1024])`
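The shape mismatch suggests the pretrained checkpoint still carries COCO's 91-class box head while the fine-tune model has a single class. How Vega's own conversion handles this may differ; below is only a generic PyTorch sketch (hypothetical checkpoint path, recent torchvision assumed) of dropping the incompatible head weights before loading:

```python
import torch
import torchvision

# Stand-in model with one foreground class plus background; the key names in
# the error (roi_heads.box_predictor.cls_score.*) match torchvision's Faster R-CNN.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)

# Hypothetical path to a COCO-pretrained (91-class) checkpoint.
pretrained = torch.load("coco_pretrained.pth", map_location="cpu")

# Keep only the weights whose names and shapes still match the new model.
model_state = model.state_dict()
filtered = {k: v for k, v in pretrained.items()
            if k in model_state and v.shape == model_state[k].shape}
print("dropping mismatched keys:", sorted(set(pretrained) - set(filtered)))

# strict=False lets the freshly initialized box head keep its own weights.
model.load_state_dict(filtered, strict=False)
```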
I managed to run it, but the results are always `current valid perfs [mAP: -1.000, AP_small: -1.000, AP_medium: -1.000, AP_large: -1.000], best valid perfs [mAP: -1.000, AP_small: -1.000, AP_medium:...
This is the output of the fine-tune phase execution.
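In `pycocotools`, -1.000 is the value `COCOeval` reports when a category or area bucket has no matching ground truth or detections, so constant -1 values usually point at the evaluation inputs rather than at the metric code. A small sketch (file names are hypothetical) for running the evaluation outside the pipeline and checking where the -1 comes from:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val.json")   # converted ground truth (hypothetical path)
coco_dt = coco_gt.loadRes("detections_val.json")   # detections in COCO result format

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP / AP_small / AP_medium / AP_large; -1 means "no data in this bucket"
```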
Could you tell me whether the segmentation values affect the calculation of these metrics? Since my dataset was in VOC format, I converted it to COCO format and this...
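For bbox evaluation the segmentation polygons themselves are not used, but the `area` field is: AP_small/AP_medium/AP_large bucket ground truth by `area`, so a converter that writes `area` as 0 or omits it can produce -1 for those entries. A hypothetical sketch of one VOC object written as a COCO annotation, with the fields the evaluator relies on:

```python
# Hypothetical sketch of one VOC <object> converted into a COCO "annotations" entry.
# For bbox evaluation the segmentation polygon is optional, but `area` and
# `iscrowd` must be set: the size-based AP buckets are keyed on `area`.
def voc_object_to_coco(ann_id, image_id, category_id, xmin, ymin, xmax, ymax):
    w, h = xmax - xmin, ymax - ymin
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": category_id,          # must match an entry in "categories"
        "bbox": [xmin, ymin, w, h],          # COCO uses [x, y, width, height]
        "area": w * h,                       # bbox area is fine when no mask exists
        "iscrowd": 0,
        # A box outline is enough as a placeholder segmentation for detection.
        "segmentation": [[xmin, ymin, xmax, ymin, xmax, ymax, xmin, ymax]],
    }

print(voc_object_to_coco(1, 1, 1, 48, 240, 195, 371))
```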