weiaicunzai
> **RuntimeError: The size of tensor a (100) must match the size of tensor b (32) at non-singleton dimension 3**

Could you please tell me whether you are implementing ResNet18 yourself...
> [128 x 8192], m2: [512 x 4096]

VGG does not have an adaptive pooling layer, so you have to modify the fully connected layer in VGG16 to adapt it to your dataset...
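The mismatch above (`m1: [128 x 8192]` vs `m2: [512 x 4096]`) means the flattened conv output no longer matches the first fully connected layer's `in_features`. VGG16 halves the spatial size five times, so for a square input of side `s` the flattened feature size is `512 * (s // 32) ** 2`. A small helper sketching the arithmetic (the function is ours, not part of the repo; the value it returns is what you would pass as `in_features` to the first `nn.Linear`):

```python
def vgg16_fc_in_features(input_size):
    """Flattened feature size after VGG16's conv stack for a square input.

    Five 2x2 max-pool layers each halve the spatial side, and the last
    conv stage has 512 channels, so the flattened size is 512 * (s//32)^2.
    """
    feat = input_size // 32  # side length of the final feature map
    return 512 * feat * feat
```

For CIFAR-sized 32x32 inputs this gives 512 (matching the repo's `nn.Linear(512, 4096)`), while the `8192` in the error above corresponds to 128x128 inputs.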
If your dataset is an image classification dataset, you could implement your own dataset class to read your data. See more details here: https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset
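A minimal sketch of such a map-style dataset: the interface `torch.utils.data.Dataset` requires is just `__len__` and `__getitem__`, shown here without the torch dependency. The class name and the in-memory `samples` list are illustrative; with real files, `samples` would hold `(path, label)` pairs and `__getitem__` would load the image from disk.

```python
class ImageFolderDataset:
    """Map-style dataset sketch: subclass torch.utils.data.Dataset in practice."""

    def __init__(self, samples, transform=None):
        # samples: list of (image, label) pairs kept in memory for the sketch
        self.samples = samples
        self.transform = transform

    def __len__(self):
        # Number of samples; DataLoader uses this for sampling/batching
        return len(self.samples)

    def __getitem__(self, index):
        image, label = self.samples[index]
        if self.transform is not None:
            image = self.transform(image)
        return image, label
```

You would then pass an instance of this class to `torch.utils.data.DataLoader` exactly as with the built-in datasets.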
I've just updated my code and fixed this bug. I've tested the updated code on Google Colab (Python 3.6, PyTorch 1.6, one K80 GPU):

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.57 Driver Version: 418.67 CUDA...
```
> I found that googlenet.py also occupies so much GPU memory that when I train it on the ImageNet dataset, even 4 GPUs with 20GB per GPU are not enough....
According to the paper ``Deep Residual Learning for Image Recognition``:

> So we use 0.01 to warm up the training until the training error is below 80% (about 400 iterations), and then...
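The warm-up described in the quote can be sketched as a tiny schedule function: train with a learning rate of 0.01 for the first ~400 iterations, then switch to the regular base rate of 0.1. The iteration threshold and the two rates follow the quote; the function name and signature are ours.

```python
def warmup_lr(iteration, base_lr=0.1, warm_lr=0.01, warmup_iters=400):
    """Return the learning rate for a given training iteration.

    Uses the smaller warm-up rate until warmup_iters iterations have
    passed (roughly when training error drops below 80% in the paper),
    then switches to the base rate.
    """
    return warm_lr if iteration < warmup_iters else base_lr
```

In PyTorch this could equally be wrapped in a `torch.optim.lr_scheduler.LambdaLR`, but the schedule itself is just this branch.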
Could you tell me how to reproduce this bug? I've never encountered it. My code has also been updated quite a bit since then, so you could try pulling the latest code and running it again to see whether the bug still occurs.
I've never trained torchvision's resnet18 on CIFAR-100, but your question is very similar to this one #22, and you can see my answer to that question: https://github.com/weiaicunzai/pytorch-cifar100/issues/22#issuecomment-667652225. Hope this...