repulsion_loss_ssd
a doubt about "Line 97: priors = priors[pos_idx].view(-1, 4)" in multibox_loss.py
Hi, I have a doubt, as in the title. The shape of priors is (num_priors, 4), while pos_idx is (num, num_priors, 4). How can this code run successfully? Thank you!
@AllenMao I am confused by that problem, too. It works because both priors and pos_idx are Variables; if they are plain tensors, it fails to run. Besides, I find that this code only works under PyTorch versions below 0.4.0. I tried with pytorch-0.3.1 and it works well. @bailvwangzi what's your opinion?
@huangzsdy: I got the same problem, and not because they are tensors; it is because the dimensions are not appropriate. For example, when running on PASCAL, priors is [8732, 4] and pos_idx is [32, 8732, 4]. How can priors[pos_idx] work, no matter whether they are Variable or Tensor? If you say it works with Variable, can I do it like this?
va_pos_idx = Variable(pos_idx)
va_priors = Variable(priors)
va_priors = va_priors[va_pos_idx].view(-1, 4)
priors = torch.Tensor(va_priors)
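For what it's worth, the shape mismatch described above can be reproduced without PyTorch at all: NumPy's boolean-mask indexing follows the same rule. A sketch, with made-up sizes matching the PASCAL example:

```python
import numpy as np

# A boolean mask with MORE dimensions than the array it indexes is rejected.
priors = np.random.randn(8732, 4).astype(np.float32)   # (8732, 4)
pos_idx = np.ones((32, 8732, 4), dtype=bool)           # (32, 8732, 4)

try:
    matched = priors[pos_idx]          # mask has 3 dims, array has only 2
    mask_accepted = True
except IndexError:
    mask_accepted = False              # this branch is taken

# By contrast, a mask with the SAME shape as the array is fine:
same_shape = priors[priors > 0]        # 1-D array of the positive entries
```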
Thank you
@huangzsdy I used pytorch-0.3.1.
@lthngan My environment is Python 3.6.2 :: Anaconda custom (64-bit), pytorch-0.3.1.post2. It works well.
When the program reaches this line, priors and pos_idx are Variables, not tensors.
If you repeat these operations in a Python shell, it becomes clear.
Like this:
>>> torch.__version__
'0.3.1.post2'
>>> priors = torch.randn(8732, 4)
>>> pos_idx = priors > 0
>>> pos_idx = pos_idx.unsqueeze(0).expand_as(torch.randn(32, 8732, 4))
>>> priors[pos_idx]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
But if they are Variables, it works well:
>>> from torch.autograd import Variable
>>> priors = Variable(priors)
>>> pos_idx = priors > 0
>>> pos_idx = pos_idx.unsqueeze(0).expand_as(torch.randn(32, 8732, 4))
>>> priors[pos_idx]
Variable containing:
 8.5339e-01
 5.7070e-01
 4.0415e-01
 ⋮
 8.4663e-01
 2.7078e-01
 2.4516e+00
[torch.FloatTensor of size 561664]
>>> type(priors)
<class 'torch.autograd.variable.Variable'>
>>> type(pos_idx)
<class 'torch.autograd.variable.Variable'>
However, it failed when I used pytorch-0.4.0 (Python 3.6.2), with the error 'IndexError: too many indices for tensor of dimension 2'. If you want to get through this line with pytorch-0.4.0, you can add priors = priors.expand_as(pos_idx) before it.
The above is just what I found through experiment; I am confused by this problem, too. If you figure it out, please let me know. Thanks.
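A sketch of that expand_as workaround, written with NumPy so it runs standalone (np.broadcast_to plays the role of expand_as here, and all names and sizes below are made up for illustration):

```python
import numpy as np

# Hypothetical sizes matching the PASCAL example in this thread.
num, num_priors = 32, 8732
priors = np.random.randn(num_priors, 4).astype(np.float32)

# Pretend the first 10 priors are positive in every image.
pos = np.zeros((num, num_priors), dtype=bool)
pos[:, :10] = True
pos_idx = np.broadcast_to(pos[:, :, None], (num, num_priors, 4))

# The workaround: expand priors to the mask's shape first, so the boolean
# mask and the indexed array agree dimension-for-dimension.
expanded = np.broadcast_to(priors, pos_idx.shape)   # analogue of expand_as
matched = expanded[pos_idx].reshape(-1, 4)          # analogue of .view(-1, 4)
# matched has one (x, y, w, h) row per positive prior per image: (320, 4)
```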
@AllenMao What's your Python version? What's your error message?
@huangzsdy The same as you. My environment is Python 3.5.5 and pytorch-0.3.1.
priors = priors.expand_as(pos_idx)
print(priors[pos_idx].view(-1, 4))
and
priors = Variable(priors)
print(priors[pos_idx].view(-1, 4))
give the same result on pytorch-0.3.1.
But if I actually use priors = priors.expand_as(pos_idx), or pytorch-0.4.0, an error comes out later, so I use pytorch-0.3.1 and it runs well.
I think I've found a solution, and it works.
My python version is 3.6.9 and pytorch version is 1.1.0.
Firstly, as @huangzsdy said, add
priors = priors.expand_as(pos_idx)
before
priors = priors[pos_idx].view(-1, 4)
in multibox_loss.py.
Secondly, change
loss_c[pos] = 0
to
loss_c[pos.view(-1, 1)] = 0
Thirdly, change
loss_l /= N
loss_l_repul /= N
loss_c /= N
to
loss_l /= N.float()
loss_l_repul /= N.float()
loss_c /= N.float()
And finally, change all loss.data[0] or loss_*.data[0] to loss.data or loss_*.data in train.py. Besides, if your loss value is NaN, you should reduce the learning rate.
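One small addition to the last two steps: the integer count N has to be cast before dividing, and on PyTorch >= 0.4 the idiomatic replacement for loss.data[0] is actually loss.item(). A minimal sketch of both (made-up values, assuming PyTorch >= 0.4):

```python
import torch

# Made-up stand-ins for the quantities in multibox_loss.py / train.py.
loss_l = torch.tensor(6.0, requires_grad=True)   # a scalar loss
N = torch.tensor(3)                              # pos.sum() is an integer tensor

# Cast the integer divisor explicitly, as the N.float() fix above does.
normalized = loss_l / N.float()

# Since the Tensor/Variable merge, losses are 0-dim tensors, so
# loss.data[0] no longer indexes them; .item() extracts the Python scalar.
scalar = normalized.item()
```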