
RuntimeError: The expanded size of the tensor

Open · MyoungHaSong opened this issue · 3 comments

First of all, thank you for sharing the code. I ran into a problem while running it.

root@ce7bd19200ca:/workspace/yolact# python train.py --config=yolact_base_config --batch_size=5
Multiple GPUs detected! Turning off JIT.
loading annotations into memory...
Done (t=11.95s)
creating index...
index created!
loading annotations into memory...
Done (t=1.59s)
creating index...
index created!
Initializing weights...
Begin training!

Traceback (most recent call last):
  File "train.py", line 382, in <module>
    train()
  File "train.py", line 257, in train
    losses = criterion(out, wrapper, wrapper.make_mask())
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
    raise output
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
    output = module(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/workspace/yolact/layers/modules/multibox_loss.py", line 141, in forward
    pos_idx = pos.unsqueeze(pos.dim()).expand_as(loc_data)
RuntimeError: The expanded size of the tensor (19248) must match the existing size (14436) at non-singleton dimension 1. Target sizes: [2, 19248, 4]. Tensor sizes: [2, 14436, 1]

What should I do?

MyoungHaSong · Sep 09 '19
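For context, the shape mismatch in that last frame can be reproduced in isolation. A minimal standalone sketch (not YOLACT code), using the sizes from the traceback: `expand_as` only broadcasts singleton dimensions, so the number of priors behind `pos` must match the number behind `loc_data`.

```python
import torch

# Sizes taken from the error message: predictions for 19248 priors,
# but the positive-match mask was built for only 14436 priors.
loc_data = torch.zeros(2, 19248, 4)
pos = torch.zeros(2, 14436, dtype=torch.bool)

try:
    # [2, 14436, 1] cannot be expanded to [2, 19248, 4]: dim 1 is not a singleton.
    pos.unsqueeze(pos.dim()).expand_as(loc_data)
except RuntimeError as e:
    print(e)  # "The expanded size of the tensor (19248) must match the existing size (14436) ..."
```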

Not sure if this is the issue, but if you're using multiple GPUs, you should use a batch size that's evenly divisible by the number of GPUs. The first line of your log says multiple GPUs were detected, so try that and let me know if it works.

dbolya · Sep 09 '19
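For reference, a rough CPU-only sketch of why divisibility matters, assuming `nn.DataParallel` splits the input batch chunk-wise across devices (the image size below is just the yolact_base default, used for illustration):

```python
import torch

# batch_size=5 with 2 GPUs: the replicas would see uneven per-GPU batches.
batch = torch.zeros(5, 3, 550, 550)
chunks = torch.chunk(batch, 2, dim=0)   # approximates how the batch is scattered
print([c.shape[0] for c in chunks])     # [3, 2] -- uneven split
```

With a batch size that divides evenly by the GPU count (e.g. 4 or 8 on 2 GPUs), every replica receives the same per-GPU batch size.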

Hi @dbolya, I'm experiencing this as well. It only happens when I set pred_aspect_ratios to have a different number of aspect ratios at each scale, as follows: 'pred_aspect_ratios': [ [[1]], [[1/2, 2]], [[1/2, 2]], [[1]] ]. Is a setting like this not allowed?

maxmx911 · Oct 03 '21
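For context, a hypothetical back-of-the-envelope count (not YOLACT's actual make_priors) showing why per-level aspect-ratio changes shift the total prior count the loss expects. The feature-map sizes and the flattened ratio lists below are assumptions based on the 550x550 yolact_base setup:

```python
# Hypothetical SSD-style prior count: each level contributes H*W*len(aspect_ratios).
conv_sizes = [69, 35, 18, 9, 5]                      # assumed P3..P7 grid sizes for 550x550 input
default_ratios = [[1, 1/2, 2]] * 5                   # 3 anchors per cell at every level
custom_ratios = [[1], [1/2, 2], [1/2, 2], [1], [1]]  # uneven anchors per level (illustrative)

def total_priors(sizes, ratios_per_level):
    return sum(s * s * len(r) for s, r in zip(sizes, ratios_per_level))

print(total_priors(conv_sizes, default_ratios))  # 19248, matching the target size in the error
print(total_priors(conv_sizes, custom_ratios))   # a different total
```

If the predictions and the priors end up built from different aspect-ratio settings, these two totals disagree, which is exactly the kind of 19248-vs-14436 mismatch shown in the traceback.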

Same for me. I changed make_priors in multibox_loss.py, and now it works!

gaoyanearth · Feb 15 '23