DeepHyperX
Validation accuracy is incorrect when using fully convolutional models
Hi, which one are you referring to? I would like to try it.
@snowzm you can try the `lee` model, which is fully convolutional IIRC. Validation accuracy is grossly incorrect in this case.
@nshaud I think this issue may relate to your choice of normalization method. For more details, see Experiment Reports.pdf.
@nshaud there may be two reasons for this issue:

1. In line 1225 of the `val` function in `models.py`, `if out.item() in ignored_labels:` should be corrected to `if pred.item() in ignored_labels:`. This results in an incorrect calculation of the validation accuracy.
2. A wrongly chosen normalization keeps the training loss high; you could try the SNB normalization (see my recent pull request).

There are three rows of experimental pictures illustrating the statements above (10% samples per class on the Indian Pines data set). With the original settings, I got the first row of images. After applying fix 1), I got the second row. After applying both 1) and 2), I got the last row.
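To make the effect of fix 1) concrete, here is a minimal sketch of a validation-accuracy loop with the corrected check. The function name `val_accuracy` and the exact variable names are assumptions for illustration; the point is that pixels are skipped based on the *ground-truth* label (`pred` in the repo's naming), not the model output (`out`):

```python
import torch

def val_accuracy(outputs, targets, ignored_labels):
    """Accuracy over pixels whose ground-truth label is not ignored.

    outputs: predicted class indices, targets: ground-truth labels.
    Hypothetical helper illustrating the corrected check in `val`.
    """
    correct, total = 0, 0
    for out, pred in zip(outputs.view(-1), targets.view(-1)):
        # Corrected: filter on the ground-truth label, not the prediction.
        if pred.item() in ignored_labels:
            continue
        correct += int(out.item() == pred.item())
        total += 1
    return correct / total if total > 0 else 0.0
```

With the original `out.item()` check, a model that wrongly predicts an ignored class for a labeled pixel would have that pixel silently dropped from the denominator, inflating the reported accuracy.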
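For point 2), a per-band standardization (zero mean, unit variance for each spectral band) is one common normalization for hyperspectral cubes; whether this matches the SNB normalization in the pull request is an assumption, so treat this as a generic sketch rather than the PR's implementation:

```python
import numpy as np

def per_band_standardization(cube):
    """Standardize a hyperspectral cube of shape (H, W, bands) so each
    spectral band has zero mean and unit variance across all pixels.

    Sketch only; the SNB normalization from the pull request may differ.
    """
    mean = cube.mean(axis=(0, 1), keepdims=True)
    std = cube.std(axis=(0, 1), keepdims=True)
    # Small epsilon guards against constant (zero-variance) bands.
    return (cube - mean) / (std + 1e-8)
```

Standardizing per band rather than over the whole cube prevents bright bands from dominating the input scale, which can otherwise keep the training loss high.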