QA-FewDet
Is there something wrong with the original cfg file?
Command: sh scripts/meta_training_pascalvoc_split1_resnet101.sh
ValueError: Milestone must be smaller than total number of updates: num_updates=10000, milestone=10000
detectron2 version: 0.5
cfg file (SOLVER section):
  IMS_PER_BATCH: 4
  BASE_LR: 0.002
  STEPS: (15000, 20000)
  MAX_ITER: 20000
  CHECKPOINT_PERIOD: 10000
Has the author encountered this problem?
Environment: Ubuntu 18 + PyTorch 1.8 + CUDA 11.0, detectron2 v0.5
Hi, the default cfg files work well on my local machine. It seems that the error might be related to the software version. I actually used an older version of PyTorch (1.6.0) and detectron2 (0.2.1) and have not tested our code under other versions. Hope this is useful to you. Guangxing
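For context, this error appears to come from the newer LR-scheduler path in detectron2 0.5 (fvcore's MultiStepParamScheduler), which requires every milestone in SOLVER.STEPS to be strictly smaller than SOLVER.MAX_ITER. Note that the numbers in the reported error (10000) do not match the quoted SOLVER section, so the failing scheduler may be built from a different cfg in the launch chain. A minimal workaround sketch, with illustrative values only:

```python
# A minimal sketch (not the repo's code) of one possible workaround for the
# scheduler error under detectron2 >= 0.5: make every LR-decay milestone in
# SOLVER.STEPS strictly smaller than SOLVER.MAX_ITER before the scheduler is
# built. The example values below are illustrative only.

def clamp_steps(steps, max_iter):
    """Keep only milestones that are strictly below the total iteration count."""
    kept = tuple(s for s in steps if s < max_iter)
    if kept != tuple(steps):
        print(f"Dropping milestones >= MAX_ITER={max_iter}: {steps} -> {kept}")
    return kept

# A milestone equal to MAX_ITER is what triggers the ValueError
# (num_updates=10000, milestone=10000 in the report above):
print(clamp_steps((10000, 15000), 10000))   # -> ()
# With the SOLVER section quoted above:
print(clamp_steps((15000, 20000), 20000))   # -> (15000,)
```

If you stay on detectron2 0.5, one place to apply such a clamp would be in the training script's setup() before cfg.freeze(), assuming it follows the standard detectron2 template; the other option is to downgrade to the versions the author used (PyTorch 1.6.0, detectron2 0.2.1).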
OK. I also asked the same question on Detectron2; maybe I need to find another way to solve this problem. By the way, does the other paper (Meta Faster R-CNN) use the same configuration and environment?
Yes, we used the same environment in the two repos.
Thank you for your reply and good luck with your work.
Excuse me, I have another question: what is the difference between fsod_train_net_fewx.py and fsod_train_net.py? They look very similar; can I keep only one of the two commands in meta_training_pascalvoc_split1_resnet101.sh?
Both scripts are crucial.
We first use fsod_train_net_fewx.py to train the baseline model following the FewX repo, which is reorganized in our fewx module.
Then we add the proposed heterogeneous GCNs and use fsod_train_net.py to train the whole model, which is defined in our QA_FewDet module.
The two modules, fewx and QA_FewDet, are different, and the two-step meta-training is crucial for the final performance. If we only use the QA_FewDet module, the training is unstable.
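For a rough picture of how the two steps chain together, here is a minimal sketch assuming the scripts accept detectron2's standard launcher flags; the config names, GPU count, and checkpoint path are hypothetical placeholders (the real commands are in scripts/meta_training_pascalvoc_split1_resnet101.sh):

```python
# Conceptual sketch of the two-step meta-training; the config files, GPU count,
# and checkpoint path below are hypothetical placeholders, not the repo's
# actual values.
import subprocess

# Step 1: train the FewX-style baseline (fewx module).
subprocess.run([
    "python", "fsod_train_net_fewx.py",
    "--num-gpus", "4",
    "--config-file", "configs/fsod/step1_fewx_baseline.yaml",   # hypothetical
], check=True)

# Step 2: add the heterogeneous GCNs (QA_FewDet module) and train the whole
# model, presumably initialized from the step-1 checkpoint.
subprocess.run([
    "python", "fsod_train_net.py",
    "--num-gpus", "4",
    "--config-file", "configs/fsod/step2_qa_fewdet.yaml",       # hypothetical
    "MODEL.WEIGHTS", "output/fewx_step1/model_final.pth",       # hypothetical
], check=True)
```

The essential point is that both commands are needed: the first trains the fewx baseline and the second trains the full QA_FewDet model, so the shell script should keep both lines.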
Thank you for your excellent work and your reply.