pytorch-adda
Error about MNIST data shape
When I run `main.py`, I get the following error:

```
Traceback (most recent call last):
  File "Domain_Adaption/pytorch-adda/main.py", line 41, in
```
I also meet this. When I run `main.py`, I get the following error:

```
Traceback (most recent call last):
  File "Domain_Adaption/pytorch-adda/main.py", line 41, in <module>
    src_encoder, src_classifier, src_data_loader)
  File "Domain_Adaption/pytorch-adda/core/pretrain.py", line 32, in train_src
    for step, (images, labels) in enumerate(data_loader):
  File "/envs//lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 615, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/envs//lib/python3.6/site-packages/torchvision/datasets/mnist.py", line 95, in __getitem__
    img = self.transform(img)
  File "/envs//lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 60, in __call__
    img = t(img)
  File "/envs//lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 163, in __call__
    return F.normalize(tensor, self.mean, self.std, self.inplace)
  File "/envs/*/lib/python3.6/site-packages/torchvision/transforms/functional.py", line 208, in normalize
    tensor.sub_(mean[:, None, None]).div_(std[:, None, None])
RuntimeError: output with shape [1, 28, 28] doesn't match the broadcast shape [3, 28, 28]
```
I also met the same problem. How did you solve it?
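The last frame of the traceback pins down the cause: torchvision's `normalize` subtracts the mean in place, and an in-place op cannot change the tensor's shape. With a three-value mean/std (meant for RGB) and a one-channel MNIST tensor, broadcasting would have to expand the result to `[3, 28, 28]`, which cannot be written back into `[1, 28, 28]`. A minimal reproduction of just that step (the tensor and statistics below are stand-ins, shapes taken from the error message):

```python
import torch

# A fake MNIST image tensor: 1 channel, 28x28, as ToTensor() would produce.
img = torch.rand(1, 28, 28)

# Three-channel statistics, as would be used for RGB images.
mean = torch.tensor([0.5, 0.5, 0.5])
std = torch.tensor([0.5, 0.5, 0.5])

try:
    # The exact in-place operation from torchvision's normalize():
    # broadcasting would expand the result to [3, 28, 28], which cannot
    # be stored back into the [1, 28, 28] input tensor.
    img.sub_(mean[:, None, None]).div_(std[:, None, None])
except RuntimeError as err:
    print(err)  # the same shape-mismatch message as in the traceback above
```

The out-of-place version (`img - mean[:, None, None]`) would silently broadcast to three channels instead of raising, which is why the error only surfaces inside `Normalize`.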
Seems like the same as #20, but with more details. Also, I'm facing the same error.
You can use this solution: https://stackoverflow.com/questions/56033173/kmnist-runtimeerror-output-with-shape-1-28-28-doesnt-match-the-broadcast
@Dr-Zhou Do you mean that a solution could be provided on Stack Overflow, or that the solution is already there? There are no answers to that question.
You can see this: https://blog.csdn.net/weixin_43159148/article/details/88778371. But I also met another problem.
Downgrading torch and torchvision to 0.2.0 and 0.2.1 solved this issue for me.
`transforms.Normalize([0.5], [0.5])` works in my setup: MNIST images have a single channel, so `Normalize` needs exactly one mean and one std.