code-of-learn-deep-learning-with-pytorch

This is the code for the book "Learn Deep Learning with PyTorch".

26 issues

@L1aoXingyu Hello, and thank you very much for sharing this. One small suggestion: learning rate decay can be handled with the classes in `torch.optim.lr_scheduler`, such as `ExponentialLR`, which keeps the decay schedule separate from the training logic. It might be worth introducing it this way in the section on decay, since this is also how it is most widely used in practice. https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
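A minimal sketch of that suggestion, assuming an SGD optimizer and a per-epoch decay; the model and the loss are placeholders, not the book's code:

```python
import torch
import torch.optim as optim

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = optim.SGD(model.parameters(), lr=0.1)
# ExponentialLR multiplies the learning rate by gamma on each scheduler.step()
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

for epoch in range(5):
    optimizer.zero_grad()
    loss = model(torch.randn(4, 10)).pow(2).mean()  # dummy loss
    loss.backward()
    optimizer.step()
    scheduler.step()  # decay the learning rate once per epoch
    print(epoch, scheduler.get_last_lr())
```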

This snippet in net.py:

```python
vgg = models.vgg19(pretrained=True)
self.feature = nn.Sequential(*list(vgg.children())[:-1])
self.feature.add_module('global average', nn.AvgPool2d(9))
```

The first two lines produce a 7×7 feature map, while the third line uses an average-pooling kernel of size 9, so running it raises: `Given input size: (512x7x7). Calculated output size: (512x0x0). Output size is too small`. My PyTorch version is 1.6. Is this a version issue? But surely a version upgrade would not change the architecture of a classic network like VGG. What is going on here?
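One possible workaround, assuming the intent is global average pooling over whatever spatial size the backbone happens to produce: replace the fixed-kernel `nn.AvgPool2d(9)` with `nn.AdaptiveAvgPool2d(1)`, which always reduces to 1×1. This is a sketch, not the book's original code:

```python
import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg19(pretrained=True)
feature = nn.Sequential(*list(vgg.children())[:-1])
# AdaptiveAvgPool2d(1) pools any HxW feature map down to 1x1, so it works
# whether the backbone outputs 7x7 (224x224 input) or 9x9 (288x288 input).
feature.add_module('global average', nn.AdaptiveAvgPool2d(1))

x = torch.randn(1, 3, 224, 224)
print(feature(x).shape)  # torch.Size([1, 512, 1, 1])
```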

In the fourth code block, the "data preprocessing" section, last line: shouldn't standardization be `(x - min_value) / scalar`? Why doesn't your code subtract `min_value`?
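For reference, a sketch contrasting the two transforms the comment is comparing; the variable names are illustrative, not the book's:

```python
import numpy as np

x = np.array([0.0, 64.0, 128.0, 255.0])  # e.g. pixel values

# Simple rescaling: divides by a scalar only. If min(x) == 0 this already
# lands in [0, 1], which may be why the book's code skips the subtraction.
x_rescaled = x / 255.0

# Min-max normalization, as the comment suggests: shifting by the minimum
# first guarantees the result spans [0, 1] even when min(x) != 0.
x_minmax = (x - x.min()) / (x.max() - x.min())
```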

![image](https://user-images.githubusercontent.com/71718032/118803933-94916200-b8d6-11eb-80a8-ea5bb0a83fed.png) `tfs.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])` is wrong here and should be revised to **`tfs.Normalize([0.5], [0.5])`**.
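A sketch of the corrected transform, assuming the images are single-channel grayscale (e.g. MNIST); the mean/std lists passed to `Normalize` must match the number of channels:

```python
from torchvision import transforms as tfs

# For 1-channel images, Normalize takes one mean and one std.
data_tf = tfs.Compose([
    tfs.ToTensor(),               # (1, H, W) float tensor in [0, 1]
    tfs.Normalize([0.5], [0.5]),  # maps [0, 1] to [-1, 1]
])
```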

```python
# Plot the result before the parameter update
w0 = w[0].data[0]
w1 = w[1].data[0]
b0 = b.data[0]
plot_x = np.arange(0.2, 1, 0.01)
plot_y = (-w0 * plot_x - b0) / w1
```

The multiplication above fails; `w0`, `w1`, and `b0` should be converted to NumPy values (plain floats) first for this to run.
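A sketch of the fix, assuming `w` is a 2-element weight tensor and `b` a 1-element bias as in the book's logistic-regression example (the placeholders below stand in for the trained parameters); `.item()` extracts a plain Python float so the NumPy arithmetic works:

```python
import numpy as np
import torch
import matplotlib.pyplot as plt

# Placeholders standing in for the trained w and b from the book.
w = torch.randn(2, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

# .item() returns a Python float, so NumPy broadcasting works below.
w0 = w[0].item()
w1 = w[1].item()
b0 = b.item()

plot_x = np.arange(0.2, 1, 0.01)
plot_y = (-w0 * plot_x - b0) / w1  # decision boundary: w0*x + w1*y + b = 0
plt.plot(plot_x, plot_y)
```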

Looking forward to the new edition of the book. The Gluon tutorial really is accessible yet thorough, and working through that level of detail feels like it genuinely builds coding skill~~~

Change:

```python
acc = (mask == y_data).sum().data[0] / y_data.shape[0]
if (e + 1) % 200 == 0:
    print('epoch: {}, Loss: {:.5f}, Acc: {:.5f}'.format(e + 1, loss.data[0], acc))
```

to `acc = (mask ==...`
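The suggestion is cut off in the listing; presumably (an assumption, since the original truncates) it replaces the deprecated `.data[0]` indexing with `.item()`, along the lines of:

```python
# Assumed completion of the truncated suggestion: on PyTorch >= 0.4,
# 0-dim tensors must be read with .item() instead of .data[0].
acc = (mask == y_data).sum().item() / y_data.shape[0]
if (e + 1) % 200 == 0:
    print('epoch: {}, Loss: {:.5f}, Acc: {:.5f}'.format(e + 1, loss.item(), acc))
```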