
Validation accuracy is incorrect when using fully convolutional models

Open nshaud opened this issue 4 years ago • 4 comments

nshaud avatar Nov 16 '20 12:11 nshaud

Hi, which model are you referring to? I would like to try it.

mengxue-rs avatar Nov 20 '20 07:11 mengxue-rs

@snowzm you can try the lee model, which is fully convolutional IIRC. Validation accuracy is grossly incorrect in this case.

nshaud avatar Nov 20 '20 11:11 nshaud

@nshaud I think this issue may be related to your choice of normalization method. See Experiment Reports.pdf for more details.

mengxue-rs avatar Nov 26 '20 07:11 mengxue-rs

@nshaud there may be two reasons for this issue.

  1. In line 1225 of the `val` function in models.py, `if out.item() in ignored_labels:` should be corrected to `if pred.item() in ignored_labels:`. The current check results in an incorrect calculation of the validation accuracy;
  2. A wrongly chosen normalization keeps the training loss high; you could try the SNB normalization (see my recent pull request). The three rows of images below illustrate these points (10% samples per class on the Indian Pines data set): with the original settings I got the first row; after applying fix 1) I got the second row; and after applying both 1) and 2) I got the last row.

[Images 1–3: experiment results for the original settings, with fix 1), and with fixes 1) and 2)]
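To make fix 1) concrete, here is a minimal, hypothetical sketch of a validation-accuracy loop (the function name `val_accuracy` and the array arguments are illustrative, not the actual DeepHyperX code). The key point is that samples must be skipped based on the class id (`pred`), not on the raw network output (`out`), which is what the original `if out.item() in ignored_labels:` check effectively compared.

```python
import numpy as np

def val_accuracy(logits, targets, ignored_labels):
    """Accuracy over validation samples, skipping ignored classes.

    logits: (N, C) array of per-class scores for N samples
    targets: (N,) array of ground-truth class ids
    ignored_labels: set of class ids to exclude (e.g. the 'undefined' class)
    """
    correct, total = 0, 0
    for out, target in zip(logits, targets):
        pred = int(np.argmax(out))  # predicted class id
        # Fixed check: test the predicted class id against ignored_labels,
        # mirroring `if pred.item() in ignored_labels:` from the comment above.
        if pred in ignored_labels:
            continue
        correct += int(pred == int(target))
        total += 1
    return correct / max(total, 1)
```

Testing the raw output (a vector of scores) against `ignored_labels` never matches any integer label id, so ignored samples are silently counted and the reported accuracy is wrong.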

mengxue-rs avatar Nov 30 '20 10:11 mengxue-rs