
RuntimeError: cannot perform reduction function max on tensor with no elements because the operation does not have an identity

Open wudi00 opened this issue 5 years ago • 8 comments

Original Traceback (most recent call last):
  File "/media/disk/wudi/.local/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/media/disk/wudi/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/media/disk/wudi/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/media/disk/wudi/SSD/ssd/data/datasets/voc.py", line 50, in __getitem__
    boxes, labels = self.target_transform(boxes, labels)
  File "/media/disk/wudi/SSD/ssd/data/transforms/target_transform.py", line 21, in __call__
    self.corner_form_priors, self.iou_threshold)
  File "/media/disk/wudi/SSD/ssd/utils/box_utils.py", line 89, in assign_priors
    best_target_per_prior, best_target_per_prior_index = ious.max(1)
RuntimeError: cannot perform reduction function max on tensor with no elements because the operation does not have an identity

I ran into this error during training and can't figure out what is wrong.

wudi00 avatar Dec 12 '19 03:12 wudi00
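For context, the RuntimeError comes from calling `.max(1)` on an `ious` tensor whose target dimension has size zero, i.e. the image ended up with no ground-truth boxes after annotation filtering. A minimal sketch of a guard (the function and its fallback return values are illustrative, not the repository's actual code):

```python
import torch

def safe_max_over_targets(ious):
    """ious: (num_priors, num_targets) IoU matrix, as in assign_priors.

    When num_targets == 0 (every annotation was filtered out),
    ious.max(1) reduces over zero elements and raises:
    'cannot perform reduction function max on tensor with no elements'.
    This hypothetical guard falls back to all-background assignments.
    """
    if ious.shape[1] == 0:
        num_priors = ious.shape[0]
        return torch.zeros(num_priors), torch.zeros(num_priors, dtype=torch.long)
    best_iou, best_index = ious.max(1)
    return best_iou, best_index
```

Guarding here only masks the symptom for images with empty annotation lists; fixing why the annotations come back empty is the better cure.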

There is another kind of error that can occur in training.

Traceback (most recent call last):
  File "train.py", line 116, in <module>
    main()
  File "train.py", line 107, in main
    model = train(cfg, args)
  File "train.py", line 46, in train
    model = do_train(cfg, model, train_loader, optimizer, scheduler, checkpointer, device, arguments, args)
  File "/media/disk/wudi/SSD/ssd/engine/trainer.py", line 74, in do_train
    for iteration, (images, targets, _) in enumerate(data_loader, start_iter):
  File "/media/disk/wudi/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 801, in __next__
    return self._process_data(data)
  File "/media/disk/wudi/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 846, in _process_data
    data.reraise()
  File "/media/disk/wudi/.local/lib/python3.6/site-packages/torch/_utils.py", line 369, in reraise
    raise self.exc_type(msg)
IndexError: Caught IndexError in DataLoader worker process 1.
Original Traceback (most recent call last):
  File "/media/disk/wudi/.local/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/media/disk/wudi/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/media/disk/wudi/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/media/disk/wudi/SSD/ssd/data/datasets/voc.py", line 48, in __getitem__
    image, boxes, labels = self.transform(image, boxes, labels)
  File "/media/disk/wudi/SSD/ssd/data/transforms/transforms.py", line 75, in __call__
    img, boxes, labels = t(img, boxes, labels)
  File "/media/disk/wudi/SSD/ssd/data/transforms/transforms.py", line 383, in __call__
    boxes[:, 0::2] = width - boxes[:, 2::-2]
IndexError: too many indices for array

I hope to get an answer. Thank you.

wudi00 avatar Dec 13 '19 01:12 wudi00
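This second traceback has the same root cause: by the time the random-flip transform runs, `boxes` is an empty 1-D array instead of an (N, 4) array, so the 2-D slice `boxes[:, 0::2]` raises `IndexError: too many indices for array`. A small sketch of both cases (the box values are made up for illustration):

```python
import numpy as np

width = 100
boxes = np.array([[10., 20., 50., 60.]])  # one (x1, y1, x2, y2) box

# Normal case: the horizontal-flip slice assignment works on an (N, 4) array.
# boxes[:, 2::-2] picks columns [x2, x1], so new x1 = W - x2, new x2 = W - x1.
flipped = boxes.copy()
flipped[:, 0::2] = width - boxes[:, 2::-2]
print(flipped)  # [[50. 20. 90. 60.]]

# Failure case: when every annotation was filtered out, boxes is 1-D empty.
empty = np.array([])  # shape (0,), not (0, 4)
try:
    empty[:, 0::2]
except IndexError as e:
    print(e)  # "too many indices for array ..."
```

So the IndexError is just the empty-annotation problem surfacing in a different transform.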

I hope to get an answer, too.

Linda-L avatar Mar 06 '20 13:03 Linda-L

I get the same problem when training on my own dataset. I organized my dataset (3 classes including the background) into VOC2007 format and changed CLASSES_NUM from 21 to 3 in the corresponding code, but I still hit the error. Hope to get an answer, thanks!

Think1ess avatar Mar 18 '20 02:03 Think1ess

I'm hitting this problem too. @wudi00 @Linda-L @Think1ess, did you solve it?

yustaub avatar Mar 27 '20 08:03 yustaub

I also faced this problem. Has anybody solved it?

yukang123 avatar Jun 07 '20 01:06 yukang123

I ran into the same problem.

yangbisheng2009 avatar Jul 02 '20 07:07 yangbisheng2009

I solved this problem. If you use VOC data with keep_difficult=False and an image contains only a single object that is marked difficult, the error occurs: that object gets filtered out of the image's annotations, leaving an empty tensor.

Just modify voc.py ------------> line 18: change keep_difficult=False to keep_difficult=True.

https://github.com/yangbisheng2009/simple-retinanet-pytorch I implemented a project of my own. It is easier to use and the performance is much better. A star would be welcome.

yangbisheng2009 avatar Jul 02 '20 07:07 yangbisheng2009
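The filtering described above can be sketched as follows (the function name and argument layout are illustrative, not the exact voc.py code): with keep_difficult=False, an image whose only object is tagged difficult ends up with zero boxes, which is exactly the empty array/tensor that both tracebacks in this thread complain about.

```python
import numpy as np

def parse_voc_objects(objects, keep_difficult=False):
    """objects: list of (box, label, is_difficult) tuples (illustrative layout)."""
    boxes, labels = [], []
    for box, label, is_difficult in objects:
        if is_difficult and not keep_difficult:
            continue  # object dropped from the annotation list
        boxes.append(box)
        labels.append(label)
    return (np.array(boxes, dtype=np.float32),
            np.array(labels, dtype=np.int64))

# An image whose single object is marked difficult:
objects = [([10, 20, 50, 60], 1, True)]
boxes, _ = parse_voc_objects(objects, keep_difficult=False)
print(boxes.shape)  # (0,) -> empty; later transforms on it crash
boxes, _ = parse_voc_objects(objects, keep_difficult=True)
print(boxes.shape)  # (1, 4) -> the object survives
```

An alternative to flipping keep_difficult is to skip or re-sample images whose annotation list comes back empty, which also covers images that genuinely contain no objects.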


Friend, I made the change using your method, but I still get the same error. Do you have any suggestions?

jhyscode avatar Oct 10 '21 16:10 jhyscode