AgeProgression

ValueError

Open yayagege opened this issue 6 years ago • 4 comments

Hi, when I run this code I get a ValueError, shown below:

    Traceback (most recent call last):
      File "main.py", line 129, in <module>
        models_saving=args.models_saving
      File "/home/yang/Documents/code/CAAE/pytorch/model.py", line 417, in teach
        d_z_prior = self.Dz(z_prior)
      File "/home/yang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/yang/Documents/code/CAAE/pytorch/model.py", line 96, in forward
        out = layer(out)
      File "/home/yang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/yang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/container.py", line 91, in forward
        input = module(input)
      File "/home/yang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/yang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 66, in forward
        exponential_average_factor, self.eps)
      File "/home/yang/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 1251, in batch_norm
        raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
    ValueError: Expected more than 1 value per channel when training, got input size [1, 64]

Can this error be fixed easily? I haven't looked through your full code carefully, so apologies if this is a silly question. ^_-
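For reference, this is the error torch.nn.BatchNorm1d raises when a training-mode forward pass receives a batch containing a single sample: batch statistics cannot be computed from one value per channel. A minimal standalone reproduction (not the project's code):

    import torch
    import torch.nn as nn

    bn = nn.BatchNorm1d(64)   # 64 channels, matching the [1, 64] input in the traceback
    bn.train()                # the check only fires in training mode
    x = torch.randn(1, 64)    # a batch with a single sample
    bn(x)                     # ValueError: Expected more than 1 value per channel when training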

yayagege avatar Jan 14 '19 13:01 yayagege

  1. What's your command?
  2. I think there's a similar thread here.
  3. Try changing the batch size (see the sketch below this list).
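The batch-size suggestion works when it changes the size of the final, partial batch: the crash occurs exactly when the dataset length leaves a remainder of 1, so the last batch holds a single sample. A small arithmetic sketch (the dataset length here is assumed, purely for illustration):

    # Hypothetical dataset size, not the real UTKFace count
    dataset_len = 16449
    for batch_size in (64, 128):
        last = dataset_len % batch_size or batch_size
        print(f"batch_size={batch_size}: final batch has {last} sample(s)")
    # batch_size=64: final batch has 1 sample(s)    -> BatchNorm fails here
    # batch_size=128: final batch has 65 sample(s)  -> fine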

mattans avatar Jan 14 '19 19:01 mattans

I just used the default values and ran tag v1.1 with the command: python main.py --mode train --input ./data/UTKFace --output ./results. I still get the same error.

yayagege avatar Jan 21 '19 08:01 yayagege

It works fine, but only with a batch size of 128.

ArashHosseini avatar Jan 30 '19 18:01 ArashHosseini

I solved the problem by setting drop_last=True in the DataLoaders in model.py, i.e.:

    train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, drop_last=True)
    valid_loader = DataLoader(dataset=valid_dataset, batch_size=batch_size, shuffle=False, drop_last=True)

Hope it helps.
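A standalone sketch (not the project's code) showing what drop_last=True changes: the trailing 1-sample batch is simply discarded instead of being fed to BatchNorm.

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    ds = TensorDataset(torch.randn(65, 3))  # 65 samples: one more than a full batch of 64
    print([xb.shape[0] for (xb,) in DataLoader(ds, batch_size=64)])                  # [64, 1]
    print([xb.shape[0] for (xb,) in DataLoader(ds, batch_size=64, drop_last=True)])  # [64]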

Actasidiot avatar May 09 '19 10:05 Actasidiot