
RuntimeError: Mismatch in shape

Open · Algabri opened this issue 2 years ago · 1 comment

I am trying to run train_hopenet.py:

python3 train_hopenet.py --dataset AFLW2000 --data_dir datasets/AFLW2000 --filename_list datasets/AFLW2000/files.txt --output_string er

I got this error:

Loading data.

/home/redhwan/.local/lib/python3.8/site-packages/torch/optim/adam.py:90: UserWarning: optimizer contains a parameter group with duplicate parameters; in future, this will cause an error; see github.com/pytorch/pytorch/issues/40967 for more information
  super(Adam, self).__init__(params, defaults)
Ready to train network.
Traceback (most recent call last):
  File "train_hopenet.py", line 193, in <module>
    torch.autograd.backward(loss_seq, grad_seq)
  File "/home/redhwan/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 166, in backward
    grad_tensors_ = _make_grads(tensors, grad_tensors_, is_grads_batched=False)
  File "/home/redhwan/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 50, in _make_grads
    raise RuntimeError("Mismatch in shape: grad_output["
RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([1]) and output[0] has a shape of torch.Size([]).


How can I solve it?

Note: torch.__version__ = 1.12.0+cu102

Algabri · Dec 19 '22 04:12

I changed this line:

grad_seq = [torch.ones(1).cuda(gpu) for _ in range(len(loss_seq))]

To be:

grad_seq = [torch.tensor(1, dtype=torch.float).cuda(gpu) for _ in range(len(loss_seq))]

It is working fine now.
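
For reference, here is why the shapes clash: in recent PyTorch versions, criterion losses with the default reduction are 0-dim tensors (shape torch.Size([])), while torch.ones(1) has shape torch.Size([1]), and torch.autograd.backward requires each grad tensor to match the shape of its output exactly. A minimal standalone sketch of the mismatch and the fix (CPU-only, not the repo's code):

import torch

x = torch.ones(3, requires_grad=True)
loss = x.sum()  # 0-dim loss tensor: torch.Size([])

# A shape-[1] grad tensor against a 0-dim loss raises the RuntimeError above:
# torch.autograd.backward([loss], [torch.ones(1)])

# A 0-dim grad tensor matches the 0-dim loss, so backward succeeds:
torch.autograd.backward([loss], [torch.tensor(1.0)])
print(x.grad)  # tensor([1., 1., 1.])

torch.ones([]).cuda(gpu) (note the empty size) would be an equivalent fix, and since every gradient here is 1.0, sum(loss_seq).backward() should behave the same; the shape-matched grad_seq is just the smallest change to the original script.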

Algabri · Dec 20 '22 05:12