pytorch-YOLOv4
yolov4-tiny Error when training
I have changed the cfg for the correct number of classes, filters, max_batches, and steps.
In the yolo loss function I have changed the following (sketched as code after the list):
- image_size = 416
- self.strides = [32, 16]
- self.anchors = [[10,14], [23,27], [37,58], [81,82], [135,169], [344,319]]
- self.anch_masks = [[3, 4, 5], [0, 1, 2]]
- the for loop to: for i in range(len(self.strides))
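For reference, this is roughly how those settings look in my modified Yolo_loss __init__ (a sketch of my edits, not the repo's original defaults):

    # inside the Yolo_loss class in train.py (my yolov4-tiny values)
    image_size = 416
    self.strides = [32, 16]          # tiny has only two YOLO heads
    self.anchors = [[10, 14], [23, 27], [37, 58], [81, 82], [135, 169], [344, 319]]
    self.anch_masks = [[3, 4, 5], [0, 1, 2]]
    self.n_anchors = 3               # I tried both 2 and 3, see the two errors below

    for i in range(len(self.strides)):   # loop over the two heads instead of three
        ...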
If self.n_anchors is set to 2, I get this error:
File "train.py", line 648, in <module>
train(model=model,
File "train.py", line 394, in train
loss, loss_xy, loss_wh, loss_obj, loss_cls, loss_l2 = criterion(bboxes_pred, bboxes)
File "/home/hanna/.pyenv/versions/3.8-dev/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "train.py", line 258, in forward
output = output.view(batchsize, self.n_anchors, n_ch, fsize, fsize)
RuntimeError: shape '[64, 2, 6, 13, 13]' is invalid for input of size 194688
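A quick sanity check on those numbers (my own arithmetic, not from the repo) shows why the view fails:

    batchsize, n_ch, fsize = 64, 6, 13               # n_ch = 1 class + 5 (x, y, w, h, obj)
    print(batchsize * 2 * n_ch * fsize * fsize)      # 129792, what view([64, 2, 6, 13, 13]) expects
    print(batchsize * 3 * n_ch * fsize * fsize)      # 194688, the actual tensor size in the error
    # 194688 elements = 18 channels per cell = 3 * (classes + 5), so the conv layer feeding
    # this head still outputs filters for 3 anchors, which is why n_anchors=2 cannot reshape it.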
If self.n_anchors is set to 3, then it gets all the way to the evaluate function but then hits this error:
File "train.py", line 648, in <module>
train(model=model,
File "train.py", line 438, in train
evaluator = evaluate(eval_model, val_loader, config, device)
File "/home/hanna/.pyenv/versions/3.8-dev/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "train.py", line 493, in evaluate
outputs = model(model_input)
File "/home/hanna/.pyenv/versions/3.8-dev/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/hanna/Gitlab/yolov4/tool/darknet2pytorch.py", line 218, in forward
boxes = self.models[ind](x)
File "/home/hanna/.pyenv/versions/3.8-dev/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/hanna/Gitlab/yolov4/tool/yolo_layer.py", line 321, in forward
return yolo_forward_dynamic(output, self.thresh, self.num_classes, masked_anchors, len(self.anchor_mask),scale_x_y=self.scale_x_y)
File "/home/hanna/Gitlab/yolov4/tool/yolo_layer.py", line 185, in yolo_forward_dynamic
det_confs = det_confs.view(output.size(0), num_anchors * output.size(2) * output.size(3))
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
Any help would be greatly appreciated.
I have the same problem, have you solved it yet?
@Pigdrum Unfortunately no, I ended up just using https://github.com/AlexeyAB/darknet for yolov4 tiny
Maybe you should change yolov4-tiny.cfg or change cfg.py.
For example, the filters of each convolutional layer directly before a [yolo] layer should be:
filters = (your class num + 5) * 3
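As a concrete illustration (my own example, assuming a single-class model like the one in the traceback above):

    num_classes = 1
    filters = (num_classes + 5) * 3   # = 18, i.e. 3 anchors * 6 channels per anchor

so both convolutional layers right before the [yolo] layers in yolov4-tiny.cfg would need filters=18.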
For the error in the second case you mentioned:
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces).
Solution: replace the view() calls with reshape() calls. That worked in my case.
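For example, the line from the traceback above (yolo_layer.py, yolo_forward_dynamic) could be changed like this; reshape() copies the data when needed, or alternatively contiguous() can be called before view():

    # before: fails when det_confs is non-contiguous after a permute/transpose
    det_confs = det_confs.view(output.size(0), num_anchors * output.size(2) * output.size(3))

    # after: either variant avoids the stride error
    det_confs = det_confs.reshape(output.size(0), num_anchors * output.size(2) * output.size(3))
    # or
    det_confs = det_confs.contiguous().view(output.size(0), num_anchors * output.size(2) * output.size(3))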