book_DeepLearning_in_PyTorch_Source
Bug found
in "手写数字识别器_minst_convnet.ipynb" in def forward() x = F.log_softmax(x, dim = 0) #输出层为log_softmax,即概率对数值log(p(x))。采用log_softmax可以使得后面的交叉熵计算更快 here the dim should be 1 because we see dim=0 as per data line
Also, in the same project the loss function is CrossEntropyLoss while the last layer of the network is log_softmax(). This combination looks wrong as well: CrossEntropyLoss already applies log_softmax internally, so the normalization would be applied twice. Either the log_softmax output should be paired with NLLLoss, or the log_softmax should be dropped so the network feeds raw logits to CrossEntropyLoss.
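A sketch of the two consistent pairings, using made-up logits and targets rather than the notebook's model: CrossEntropyLoss is equivalent to log_softmax followed by NLLLoss, so the two options below produce the same loss, while stacking CrossEntropyLoss on top of a log_softmax output would not.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 10)           # raw network outputs (hypothetical)
targets = torch.tensor([3, 1, 0, 7])  # class labels (hypothetical)

# Option 1: network ends with log_softmax -> train with NLLLoss.
log_probs = F.log_softmax(logits, dim=1)
loss_nll = F.nll_loss(log_probs, targets)

# Option 2: network returns raw logits -> train with CrossEntropyLoss.
loss_ce = nn.CrossEntropyLoss()(logits, targets)

# The two pairings are equivalent; applying CrossEntropyLoss to a
# log_softmax output would normalize twice and distort the loss.
print(torch.allclose(loss_nll, loss_ce))  # True
```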