lightweight-neural-architecture-search
Questions about reproducing results
Thanks for sharing the code of such an interesting work! Just got a few questions when reproducing the code:
- The links to the pre-trained image classification models (RXX-like.pth) seem to be broken. Do you have any backup links for them?
- Do you plan to release the code for training these image classification models (RXX-like.pth), by any chance?
- I followed the instructions to deploy the pre-trained object detection models on GFL-V2. I was able to reproduce results similar to those reported for maedet-s (box_mAP: 0.451) and maedet-l (box_mAP: 0.478). However, the results for maedet-m were quite unusual:
`{'box_mAP': 0.039, 'box_mAP_50': 0.067, 'box_mAP_75': 0.039, 'box_mAP_s': 0.019, 'box_mAP_m': 0.048, 'box_mAP_i': 0.057, 'box_mAP_copypast': '0.039 0.067 0.039 0.019 0.048 0.057'}`
When you get a chance, could you please help verify if you get the same outputs for maedet-m?
- If I understood correctly, the maedet models were searched based on this score function, which includes the variance of the output feature map (as presented in Equation (4) of the MAE-DET paper). However, I am not sure I understand the part regarding the stage channels, which does not seem to be covered in the original paper. Could you please provide some insights into it? Or were the pre-trained maedet models searched with a different score function?
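For context, here is my current reading of the score as a toy NumPy sketch (not your implementation): feed Gaussian noise through a random-weight conv stack and take the log-variance of the final feature map as a differential-entropy proxy. The per-stage `channels` handling here is exactly the part I am guessing at:

```python
import numpy as np

def conv2d(x, w):
    # Naive valid-mode 2D convolution: x is (C_in, H, W), w is (C_out, C_in, k, k).
    c_out, c_in, k, _ = w.shape
    h, wd = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[o, i, j] = np.sum(x[:, i:i+k, j:j+k] * w[o])
    return out

def entropy_score(channels, k=3, img=16, seed=0):
    # Differential-entropy proxy: the entropy of a Gaussian grows with
    # log(variance), so score the architecture by the log-variance of
    # the last feature map under random weights and Gaussian input.
    # `channels` lists the width of each stage -- my assumption about
    # how the stage channels enter the score.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((channels[0], img, img))
    for c_in, c_out in zip(channels[:-1], channels[1:]):
        # He-style scaling keeps activations from exploding/vanishing.
        w = rng.standard_normal((c_out, c_in, k, k)) / np.sqrt(c_in * k * k)
        x = np.maximum(conv2d(x, w), 0.0)  # ReLU
    return float(np.log(x.var() + 1e-12))

score = entropy_score([3, 8, 16])  # e.g. a 3-stage toy network
```

Is this roughly the computation, and if so, where do the stage channels enter beyond setting the per-stage widths?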
Thank you very much!