
The robustness of the model is not good

Open nankepan opened this issue 2 years ago • 3 comments

It seems that the robustness of the model is not good: there is a large performance gap between models trained with the same code. Can this problem be fixed by increasing sample_per_class?

nankepan avatar May 22 '22 12:05 nankepan

The performance of different epochs within the same training run varies, and we choose the best epoch for evaluation. https://github.com/scutpaul/DANet/blob/f0bc57d9b2641c4dda9ce70e2c6f240ce2789069/train_DAN.py#L164 By setting `sample_per_class`, you can adjust the number of training iterations per epoch. You can also set random seeds to get different training results.
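
As a minimal sketch of fixing the seeds (assuming a standard PyTorch setup; DANet's training script may not expose a seed option out of the box):

```python
# Minimal sketch, assuming a standard PyTorch setup; not part of the
# DANet codebase itself.
import random

import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    """Fix every RNG that affects training, for reproducible runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels trade some speed for reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```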

scutpaul avatar May 23 '22 07:05 scutpaul

Maybe you misunderstood my question. I mean that I trained 4 times with the same code and used the 4 resulting model_best.pth.tar files for testing. There is a large performance gap between the 4 checkpoints.

nankepan avatar May 25 '22 01:05 nankepan

  1. You can estimate the performance of the trained model by checking its validation-set performance across several training runs.
  2. During testing, since each support set consists of a few samples randomly drawn for the unseen categories, the sampling can significantly affect the results. Even within the same training run, test results differ because different support sets are sampled for the same unseen category. Therefore, we evaluate the model by testing in k folds and averaging multiple test runs within each fold to minimize the fluctuation of test performance, as sketched below. My suggestion is to measure the model by its validation-set performance together with a complete test evaluation.
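
A rough illustration of that protocol; `eval_fold` is a hypothetical stand-in for the repo's actual per-fold test routine, and the fold/run counts are placeholders:

```python
# Hypothetical sketch: average repeated test runs within each fold,
# then average across folds. `eval_fold` is a placeholder, not a
# function from the DANet codebase.
import statistics
from typing import Callable

def cross_fold_score(eval_fold: Callable[[int], float],
                     num_folds: int = 4,
                     runs_per_fold: int = 5) -> float:
    fold_means = []
    for fold in range(num_folds):
        # Each run resamples support sets for the unseen classes, so
        # individual scores fluctuate; averaging damps that noise.
        scores = [eval_fold(fold) for _ in range(runs_per_fold)]
        fold_means.append(statistics.mean(scores))
    return statistics.mean(fold_means)
```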

scutpaul avatar May 25 '22 02:05 scutpaul