Computer_Vision_Project
RuntimeError: shape '[4, 3, 6, 11, 11]' is invalid for input of size 7260
Please, help me.
Could you post a screenshot of the error?
Namespace(batch_size=4, checkpoint_interval=1, compute_map=False, data_config='config/custom.data', epochs=100, evaluation_interval=1, gradient_accumulations=2, img_size=416, model_def='config/yolov3-custom.cfg', multiscale_training=True, n_cpu=8, pretrained_weights='weights/darknet53.conv.74')
Traceback (most recent call last):
File "train.py", line 107, in
loss, outputs = model(imgs, targets)
File "D:\Anaconda\Anaconda_3\lib\site-packages\torch\nn\modules\module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "D:\pytoch\PyTorch-YOLOv3-master (3)\PyTorch-YOLOv3-master\models.py", line 269, in forward
x, layer_loss = module[0](x, targets, img_dim)
File "D:\Anaconda\Anaconda_3\lib\site-packages\torch\nn\modules\module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "D:\pytoch\PyTorch-YOLOv3-master (3)\PyTorch-YOLOv3-master\models.py", line 153, in forward
x.view(num_samples, self.num_anchors, self.num_classes + 5, grid_size, grid_size)
RuntimeError: shape '[4, 3, 6, 11, 11]' is invalid for input of size 7260
This is my error!
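The exception itself is a plain tensor-reshape mismatch; a minimal standalone snippet, with the shapes taken from the message above, reproduces it:

import torch

# Shapes taken from the error message: batch 4, 3 anchors,
# (classes + 5) = 6 channels per anchor, and an 11x11 grid.
x = torch.zeros(7260)       # the element count the tensor really has
x.view(4, 3, 6, 11, 11)     # needs 4*3*6*11*11 = 8712 -> RuntimeError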
The input image size should be a multiple of 32 (32*n), for example 64, 128, 448, etc.
The dataset I am using is DETRAC, and the image size is 960*540. Do you mean I should change the size of the pictures?
You can try the image size 960*544, since 544/32 = 17.
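A quick way to apply that rule (a hypothetical helper, not part of the repo):

def round_up_to_multiple_of_32(x: int) -> int:
    # Round a dimension up to the nearest multiple of 32.
    return ((x + 31) // 32) * 32

# DETRAC frames are 960x540: 960 is already 30*32, but 540 is not.
assert round_up_to_multiple_of_32(960) == 960
assert round_up_to_multiple_of_32(540) == 544   # 17 * 32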
I changed the image size to 960*544 and set batch_size=2 (I only have two pictures in my training folder, just to see if it can run through), but the problem is still this same error.
Check your yolov3-custom.cfg file. The error means the requested shape needs 4*3*6*11*11 = 8712 elements, which does not equal the 7260 elements actually in the input.
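Working the numbers out (assuming batch_size=4 and the 11x11 grid from the message), the [convolutional] layer in front of the [yolo] layer is emitting the wrong number of filters:

expected = 4 * 3 * 6 * 11 * 11      # 8712: 3 anchors, classes + 5 = 6
actual = 7260                       # what the conv layer really produced
channels = actual // (4 * 11 * 11)  # 15 channels instead of the 18 needed
print(expected, actual, channels)   # 8712 7260 15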
I haven't figured out these parameters yet; let me think it over again. Thank you.
Please, help me.
Hi, I got the same error. Did you solve it?
@xiaoyuzhu666 Try changing the filters in your config file.
Thanks, but I have already changed it and the error still exists (filters=18, classes=1) 😭
@xiaoyuzhu666
There are three filters entries to change in the config file. Find each [yolo] section and change the filters value in the [convolutional] block directly before it:

[convolutional]
size=1
stride=1
pad=1
# <-- change this line: filters = (classes + 5) * 3
filters=255
activation=linear

[yolo]
mask = 0,1,2
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
classes=80
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
@xiaoyuzhu666 That's how I solved my error
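The rule as plain arithmetic (a throwaway helper, not code from the repo):

def yolo_filters(num_classes: int, anchors_per_scale: int = 3) -> int:
    # One (x, y, w, h, objectness) vector plus class scores per anchor.
    return (num_classes + 5) * anchors_per_scale

print(yolo_filters(80))   # 255 -- the COCO default in yolov3.cfg
print(yolo_filters(1))    # 18  -- a single-class dataset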
I don't know if it is related to the anchors:

[convolutional]
size=1
stride=1
pad=1
filters=18
activation=linear

[yolo]
# mask was 0,1,2; now a single anchor
mask = 0
anchors = 65,65, 97,97, 162,162
classes=1
num=3
jitter=.3
ignore_thresh = .5
truth_thresh = 1
random=1
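One consistency check worth running on that snippet (a sketch based on my reading of how models.py pairs the mask with the conv filters; with mask = 0 and classes=1 the head expects filters=6, not 18):

def check_yolo_head(filters: int, mask: list, num_classes: int) -> None:
    # The conv layer emits one prediction vector per anchor in the mask,
    # so filters must equal (classes + 5) * len(mask).
    expected = (num_classes + 5) * len(mask)
    if filters != expected:
        raise ValueError(f"filters={filters}, but {len(mask)} anchor(s) "
                         f"and {num_classes} class(es) need {expected}")

check_yolo_head(filters=6, mask=[0], num_classes=1)    # passes
check_yolo_head(filters=18, mask=[0], num_classes=1)   # raises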
@xiaoyuzhu666 If you have a single class, you can try this cfg (classes=1, so filters = (1+5)*3 = 18):
[net]
# Testing
#batch=1
#subdivisions=1
# Training
batch=16
subdivisions=1
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
learning_rate=0.001
burn_in=1000
max_batches = 500200
policy=steps
steps=400000,450000
scales=.1,.1
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky
# Downsample
[convolutional]
batch_normalize=1
filters=64
size=3
stride=2
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=32
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
# Downsample
[convolutional]
batch_normalize=1
filters=128
size=3
stride=2
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
# Downsample
[convolutional]
batch_normalize=1
filters=256
size=3
stride=2
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
# Downsample
[convolutional]
batch_normalize=1
filters=512
size=3
stride=2
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
# Downsample
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=2
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
######################
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky
[convolutional]
size=1
stride=1
pad=1
# filters = (classes + 5) * 3
filters=18
activation=linear
[yolo]
mask = 6,7,8
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
classes=1
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
[route]
layers = -4
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[upsample]
stride=2
[route]
layers = -1, 61
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky
[convolutional]
size=1
stride=1
pad=1
# filters = (classes + 5) * 3
filters=18
activation=linear
[yolo]
mask = 3,4,5
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
classes=1
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
[route]
layers = -4
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[upsample]
stride=2
[route]
layers = -1, 36
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky
[convolutional]
size=1
stride=1
pad=1
# filters = (classes + 5) * 3
filters=18
activation=linear
[yolo]
mask = 0,1,2
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
classes=1
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
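To avoid checking the three heads by hand, here is a small standalone validator sketch; it only understands the bare key=value subset used in the cfg above, and the path argument is just an example:

def validate_cfg(path: str) -> None:
    # Parse the cfg into a list of {type, key: value} blocks.
    blocks, block = [], {}
    for line in open(path):
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("["):
            block = {"type": line[1:-1]}
            blocks.append(block)
        else:
            key, value = line.split("=", 1)
            block[key.strip()] = value.strip()

    # Every conv directly before a [yolo] layer must satisfy
    # filters == (classes + 5) * len(mask).
    for prev, cur in zip(blocks, blocks[1:]):
        if cur["type"] == "yolo":
            filters = int(prev["filters"])
            classes = int(cur["classes"])
            n_mask = len(cur["mask"].split(","))
            expected = (classes + 5) * n_mask
            status = "ok" if filters == expected else "MISMATCH"
            print(f"[yolo] classes={classes} anchors={n_mask} "
                  f"filters={filters} expected={expected} -> {status}")

validate_cfg("config/yolov3-custom.cfg")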
@xiaoyuzhu666 It has nothing to do with the anchors.
I changed the anchors and num using k-means clustering; the number of anchors is 3 (not 9):

[yolo]
mask = 0
anchors = 65,65, 97,97, 162,162
classes=1
num=3
@xiaoyuzhu666 I think you are using anchors from YOLOv2-style k-means clustering, not YOLOv3-style.
0.0 Oh!!! Thank you!
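For anyone landing here later: YOLOv2-style k-means scripts usually report anchors in feature-map cell units, while a YOLOv3 cfg expects input-image pixels. A rough conversion sketch (the stride and anchor values below are made up for illustration):

stride = 32  # the coarsest YOLO head downsamples the input by 32
# Hypothetical YOLOv2-style anchors, measured in feature-map cells:
yolov2_anchors = [(1.19, 1.98), (3.12, 4.41), (8.09, 9.79)]

# YOLOv3 cfg files list anchors in input-image pixels instead:
yolov3_anchors = [(round(w * stride), round(h * stride))
                  for w, h in yolov2_anchors]
print(yolov3_anchors)  # [(38, 63), (100, 141), (259, 313)]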