yolov9
RuntimeError: shape '[145, 145, -1]' is invalid for input of size 928000
xi.view(feats[0].shape[0], self.no, -1) is resulting in a shape that is inconsistent with the expected shape [145, 145, -1]
Traceback (most recent call last):
File "train_dual.py", line 644, in <module>
main(opt)
File "train_dual.py", line 538, in main
train(opt.hyp, opt, device, callbacks)
File "train_dual.py", line 315, in train
loss, loss_items = compute_loss(pred, targets.to(device)) # loss scaled by batch_size
File "/home/raptor1/Downloads/archive(1)/yolov9/utils/loss_tal_dual.py", line 175, in __call__
pred_distri, pred_scores = torch.cat([xi.view(feats[0].shape[0], self.no, -1) for xi in feats], 2).split(
File "/home/raptor1/Downloads/archive(1)/yolov9/utils/loss_tal_dual.py", line 175, in <listcomp>
pred_distri, pred_scores = torch.cat([xi.view(feats[0].shape[0], self.no, -1) for xi in feats], 2).split(
RuntimeError: shape '[145, 145, -1]' is invalid for input of size 928000
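For what it's worth, the arithmetic behind the RuntimeError can be checked directly: a `view` to `[145, 145, -1]` only succeeds when the tensor's total element count is divisible by the two fixed dimensions, and 928000 is not.

```python
# view to [145, 145, -1] requires the total element count to be
# divisible by 145 * 145; here it is not, hence the RuntimeError.
total_elements = 928000
fixed_dims = 145 * 145  # 21025
print(total_elements % fixed_dims)  # 2900 -> non-zero, so the view fails
```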
Input size should be multiples of 64.
Input size should be multiples of 64.
What do you mean by that? The input image?
Can we not use an image of any size for training? If not, please let me know how to make it happen.
@WongKinYiu
import math

feats = p[1][0] if isinstance(p, tuple) else p[0]
feats2 = p[1][1] if isinstance(p, tuple) else p[1]
original_size = feats[0].shape[0]
new_size = math.ceil(original_size / 64) * 64  # round up to a multiple of 64
like this?? Feats shape: torch.Size([145, 8, 8]) Original size: 145 New size: 192
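One point worth noting: "multiples of 64" refers to the input image's height and width before it reaches the model, not to the feature tensors inside the loss. A minimal sketch of that idea, assuming a NumPy image in HWC layout (`pad_to_multiple` is a hypothetical helper, not part of the repo):

```python
import math
import numpy as np

def pad_to_multiple(img: np.ndarray, multiple: int = 64) -> np.ndarray:
    """Zero-pad an HWC image so its height and width are multiples of `multiple`."""
    h, w = img.shape[:2]
    new_h = math.ceil(h / multiple) * multiple
    new_w = math.ceil(w / multiple) * multiple
    # pad only on the bottom/right so the original pixels keep their coordinates
    return np.pad(img, ((0, new_h - h), (0, new_w - w), (0, 0)))

img = np.zeros((145, 200, 3), dtype=np.uint8)
padded = pad_to_multiple(img)
print(padded.shape)  # (192, 256, 3)
```

In practice the dataloader's letterboxing usually handles this when `--img` is set to a multiple of 64.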
I think it is because of your model YAML. Which one did you use? Please check yolov9-c.yaml.
If you check it, you can see the DualDDetect branch at the bottom of yolov9-c.yaml ( [[31, 34, 37, 16, 19, 22], 1, DualDDetect, [nc]], # DualDDetect(A3, A4, A5, P3, P4, P5) ), so in train_dual.py (line 315) the prediction will have two lists: pred[0] --> 3 heads, pred[1] --> 3 heads.
In gelan-c.yaml, on the other hand, you cannot find a dual branch; there is only a single one ([[15, 18, 21], 1, DDetect, [nc]], # DDetect(P3, P4, P5)), so in train.py (line 303) the prediction will have only one list: pred --> 3 heads.
So if you want to train something like gelan-c.yaml, you should use train.py, not train_dual.py.
I am working on a segmentation use case and I am using the corresponding YAML file, but I am still getting the same error.
Thanks @Kimyuhwanpeter
@anjineyulutv You are using train_dual.py with yolov9-c.yaml, i.e. the dual branch, right? What about the input size? Could you try what @WongKinYiu said: "Input size should be multiples of 64."
I am not able to understand what you mean by using train_dual.py and yolov9-c.yaml as a dual branch.
OK. I want the model to accept images of any size. Could you clarify whether that would be just a code tweak, or whether the model is not well suited to adapting to it?
@anjineyulutv Sorry, I mean
- Decide whether you want to train a 'dual' model (train_dual.py) or a 'single' model (train.py).
- If 'dual': check a model YAML such as yolov9-c.yaml, in which DualDDetect is written.
- If 'single': check a model YAML such as gelan-c.yaml, in which DDetect or similar is written (you should check the model architecture yourself).
- Then look at loss_tal_dual.py and find def __call__() in ComputeLoss. Debug whether the model's output is a single or a double list. If your output has only one list (only 3 heads) but the code is routed through loss_tal_dual.py, you should switch to a dual model. (These errors occur in 'pred_distri, pred_scores = torch.cat([xi.view(feats[0].shape[0], self.no, -1) for xi in feats], 2).split((self.reg_max * 4, self.nc), 1)'.)
So first check whether pred from model(img) consists of one or two lists.
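The check described above can be sketched without loading the full model: treat each detection head as an item in a list and inspect the nesting. The strings below are placeholders for feature tensors, and `is_dual_output` is a hypothetical helper, not a function from the repo.

```python
def is_dual_output(pred) -> bool:
    """Return True if pred looks like a dual-branch output
    (two groups of 3 heads), False for a single branch (3 heads)."""
    return (
        isinstance(pred, (tuple, list))
        and len(pred) == 2
        and all(isinstance(p, list) and len(p) == 3 for p in pred)
    )

single = ["P3", "P4", "P5"]                      # e.g. a DDetect-style output
dual = [["A3", "A4", "A5"], ["P3", "P4", "P5"]]  # e.g. a DualDDetect-style output
print(is_dual_output(single))  # False -> use train.py / loss_tal.py
print(is_dual_output(dual))    # True  -> use train_dual.py / loss_tal_dual.py
```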
I appreciate the detailed explanation! There is no such keyword in the yolov7-af-seg.yaml file, and, following your intuition, there is no operator with a Dual prefix either, so I am going with train.py. I assume this holds for all YOLOv9 models as a standard convention.
For training yolov7-af-seg.yaml, please use segment/train.py.
Thanks. Where can I find yolo-seg.pt?