pytorch-pruning
prune error:dimension out of range (expected to be in range of [-2, 1], but got 3)
error dimension out of range
values = \
    torch.sum((activation * grad), dim=0) \
    .sum(dim=2).sum(dim=3)[0, :, 0, 0].data
Hi! How did you solve it?
Hi! I am also getting the same error. Did you solve this? Thanks!!!
@longriyao @Kuldeep-Attri
This is a PyTorch version problem: this repository targets PyTorch 0.1.
If you use PyTorch 0.2, add the parameter keepdim=True to each sum call.
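For context: in PyTorch 0.1, summing over a dimension kept that dimension with size 1, while from 0.2 onward the dimension is dropped unless keepdim=True is passed. A minimal pure-Python sketch of the shape bookkeeping (torch-free, the sum_dim0 helper is just an illustration of the semantics):

```python
def sum_dim0(matrix, keepdim=False):
    """Sum a 2-D nested list over dimension 0.

    keepdim=False drops the reduced dimension: shape (2, 3) -> (3,).
    keepdim=True keeps it with size 1:         shape (2, 3) -> (1, 3).
    This mirrors the keepdim flag of torch.sum in PyTorch >= 0.2.
    """
    col_sums = [sum(row[j] for row in matrix) for j in range(len(matrix[0]))]
    return [col_sums] if keepdim else col_sums

m = [[1, 2, 3],
     [4, 5, 6]]

print(sum_dim0(m))                # [5, 7, 9]   -- 1-D result, dim dropped
print(sum_dim0(m, keepdim=True))  # [[5, 7, 9]] -- still 2-D, dim kept
```

This is why the chained sum(dim=2).sum(dim=3) fails on newer PyTorch: after each sum the tensor has already lost a dimension, so by the time dim=3 is requested it is out of range.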
Thanks!!! @hewumars
Hi @hewumars, following your suggestion I modified the code to:
values = \
    torch.sum((activation * grad), dim=0, keepdim=True) \
    .sum(dim=2, keepdim=True).sum(dim=3, keepdim=True)[0, :, 0, 0].data
The next error I am getting is in prune.py at line 33:
RuntimeError: bool value of Variable objects containing non-empty torch.FloatTensor is ambiguous
Before that, I am getting following intermediate output
Accuracy : 0.98454429573
Number of pruning iterations to reduce 67% filters 5
Ranking filters..
Layers that will be prunned {0: 1, 2: 5, 5: 6, 7: 6, 10: 21, 12: 16, 14: 21, 17: 64, 19: 64, 21: 60, 24: 62, 26: 79, 28: 107}
Prunning filters..
Please suggest how I can fix this problem.
The original code uses Python 2; if you use Python 3, please note the differences between the two versions.
I do not know where your problem is.
Python 2.7.13 pytorch 0.2.0_4
I am pasting the full message I received while running python finetune.py --prune:
/home/iab/anaconda2/envs/pytorch/lib/python2.7/site-packages/torchvision-0.1.9-py2.7.egg/torchvision/transforms/transforms.py:155: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
/home/iab/anaconda2/envs/pytorch/lib/python2.7/site-packages/torchvision-0.1.9-py2.7.egg/torchvision/transforms/transforms.py:390: UserWarning: The use of the transforms.RandomSizedCrop transform is deprecated, please use transforms.RandomResizedCrop instead.
Accuracy : 0.98454429573
Number of prunning iterations to reduce 67% filters 5
Ranking filters..
Layers that will be prunned {0: 1, 2: 5, 5: 6, 7: 6, 10: 21, 12: 16, 14: 21, 17: 64, 19: 64, 21: 60, 24: 62, 26: 79, 28: 107}
Prunning filters..
Traceback (most recent call last):
File "finetune.py", line 270, in <module>
fine_tuner.prune()
File "finetune.py", line 228, in prune
model = prune_vgg16_conv_layer(model, layer_index, filter_index)
File "/home/iab/Rohit/pytorch/filter_selection/ICLR2017/pytorch-pruning-master/prune.py", line 33, in prune_vgg16_conv_layer
bias = conv.bias)
File "/home/iab/anaconda2/envs/pytorch/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 250, in __init__
False, _pair(0), groups, bias)
File "/home/iab/anaconda2/envs/pytorch/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 34, in __init__
if bias:
File "/home/iab/anaconda2/envs/pytorch/lib/python2.7/site-packages/torch/autograd/variable.py", line 123, in __bool__
torch.typename(self.data) + " is ambiguous")
RuntimeError: bool value of Variable objects containing non-empty torch.FloatTensor is ambiguous
Is it because of an ambiguous value in the layers that will be pruned?
@RohitKeshari
# hw modify
is_bias_present = False
if conv.bias is not None:
    is_bias_present = True
new_conv = torch.nn.Conv2d(in_channels=conv.in_channels,
                           out_channels=conv.out_channels - 1,
                           kernel_size=conv.kernel_size,
                           stride=conv.stride,
                           padding=conv.padding,
                           dilation=conv.dilation,
                           groups=conv.groups,
                           bias=is_bias_present)
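The underlying cause: Conv2d's constructor does a truth test (`if bias:`) on its bias argument, and the old code passed the layer's bias tensor itself, whose truth value is ambiguous when it holds more than one element. A torch-free mock showing the mechanism (the AmbiguousTensor class is hypothetical, just mimicking Variable's behaviour):

```python
class AmbiguousTensor:
    """Mock of a multi-element Variable: its truth value is ambiguous,
    mirroring what torch.autograd.Variable raised in PyTorch 0.2."""
    def __bool__(self):
        raise RuntimeError("bool value of Variable objects containing "
                           "non-empty torch.FloatTensor is ambiguous")
    __nonzero__ = __bool__  # Python 2 name for the same hook

bias = AmbiguousTensor()

# Passing the tensor into an `if bias:` check raises, just like
# Conv2d.__init__ did when given bias=conv.bias:
try:
    if bias:
        pass
except RuntimeError as e:
    print("raised:", e)

# Passing the boolean `bias is not None` instead is unambiguous:
is_bias_present = bias is not None
if is_bias_present:
    print("ok: bias present")
```

This is why replacing bias=conv.bias with the boolean is_bias_present makes the error go away.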
@RohitKeshari Stepping through with a debugger and printing the variables can solve the problem, because your input parameter is wrong.
Thanks @hewumars, in place of conv.bias it should be True. It works for me.
Hi, I run python finetune.py --train but the output accuracy is always 1.0, and python finetune.py --prune shows: Accuracy : 1.0 Number of prunning iterations to reduce 67% filters 5. Can you help me?
@RohitKeshari where did you add the modified code?
I added it:
if not next_conv is None:
    next_conv.bias = False
    if conv.bias is not None:
        next_conv.bias = True
    next_new_conv = torch.nn.Conv2d(in_channels=next_conv.in_channels - 1,
                                    out_channels=next_conv.out_channels,
                                    kernel_size=next_conv.kernel_size,
                                    stride=next_conv.stride,
                                    padding=next_conv.padding,
                                    dilation=next_conv.dilation,
                                    groups=next_conv.groups,
                                    bias=next_conv.bias)
This did not solve the problem.
@tearhupo121031 what is your problem? Please post the error message here. Are you using the same dataset? If you are using a different dataset, it might be that the dataset is too easy.
Sorry, I have the same problem. I added:

next_new_conv = torch.nn.Conv2d(in_channels=next_conv.in_channels - 1,
                                out_channels=next_conv.out_channels,
                                kernel_size=next_conv.kernel_size,
                                stride=next_conv.stride,
                                padding=next_conv.padding,
                                dilation=next_conv.dilation,
                                groups=next_conv.groups,
                                bias=True)

Just delete the bias argument on the last line, so that Conv2d uses its default:

if not next_conv is None:
    next_conv.bias = False
    if conv.bias is not None:
        next_conv.bias = True
    next_new_conv = torch.nn.Conv2d(in_channels=next_conv.in_channels - 1,
                                    out_channels=next_conv.out_channels,
                                    kernel_size=next_conv.kernel_size,
                                    stride=next_conv.stride,
                                    padding=next_conv.padding,
                                    dilation=next_conv.dilation,
                                    groups=next_conv.groups)
@MrLinNing
In Python 3, model.features._modules.items() returns an items view, which cannot be indexed.
In Python 2.7, model.features._modules.items() returns a list, which can be indexed.
You can either use Python 2.7 or wrap it: list(model.features._modules.items())
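The difference can be reproduced with any OrderedDict, which is what nn.Module stores its submodules in (the module names and values here are made up for illustration):

```python
from collections import OrderedDict

modules = OrderedDict([("0", "Conv2d"), ("1", "ReLU"), ("2", "MaxPool2d")])

items = modules.items()
# In Python 3, items() returns a view object, which does not support indexing:
try:
    items[0]
except TypeError as e:
    print("cannot index a view:", e)

# Wrapping it in list() restores Python 2-style indexing:
indexable = list(modules.items())
print(indexable[0])  # ('0', 'Conv2d')
```

The same list(...) wrapper applied wherever the code indexes _modules.items() makes it run under Python 3.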