Pytorch-NCE

why squeeze here?

Open · chaoqing opened this issue on Aug 09 '18 · 3 comments

Hi, I think there is a bug here:

https://github.com/Stonesjtu/Pytorch-NCE/blob/862afc666445dca4ce9d24a3eb1e073255edb92e/nce.py#L198

For an RNN model where the last layer before the softmax has shape [B, N, D] with time steps N > 1, I believe the squeeze has no effect. Maybe it is meant for batch size B = 1? If that is the case, squeeze(0) might be a better choice.

I am using your code to predict only the last state (in other words, N = 1). The squeeze here gives model_loss.shape = (B, 1) and noise_loss.shape = (B,), so through broadcasting the total loss.shape becomes (B, B), while I think it should be (B, 1).
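To make the shape issue concrete, here is a standalone sketch (toy tensors, not the actual loss code in nce.py) of how broadcasting a (B, 1) tensor against a (B,) tensor yields (B, B), and how squeeze() differs from squeeze(0):

```python
import torch

B = 4
model_loss = torch.randn(B, 1)   # shape I observe after the squeeze when N = 1
noise_loss = torch.randn(B)      # shape of the noise term

# broadcasting (B, 1) against (B,) silently produces a (B, B) matrix
total_loss = model_loss + noise_loss
print(total_loss.shape)          # torch.Size([4, 4]) instead of the intended (B, 1)

# squeeze() drops *every* size-1 dimension, squeeze(0) only drops dim 0 if it is 1
y = torch.randn(2, 1, 3)
print(y.squeeze().shape)         # torch.Size([2, 3])  -- middle dim removed
print(y.squeeze(0).shape)        # torch.Size([2, 1, 3]) -- left untouched
```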

chaoqing · Aug 09 '18

Sorry, I have another question here. I think what NCE does is compute the loss only on the selected target classes, which should involve only part of the weights/biases of the linear layer. Say the last linear layer has bias.shape = (V,); then indices should hold the correct targets plus some sampled noise classes, and index_select pulls out just those classes (the number of touched classes should be len(set(indices.view(-1).numpy()))).

https://github.com/Stonesjtu/Pytorch-NCE/blob/862afc666445dca4ce9d24a3eb1e073255edb92e/index_linear.py#L60
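For reference, this is the kind of selection I mean, as a minimal standalone sketch (illustrative names, not the actual IndexLinear code):

```python
import torch

V, D = 10, 8                                   # vocabulary size, hidden size
weight = torch.randn(V, D, requires_grad=True)
bias = torch.randn(V, requires_grad=True)

# target class plus sampled noise classes, possibly containing duplicates
indices = torch.tensor([[2, 7, 7, 5]])

# only the selected rows of weight / entries of bias enter the forward pass
w = weight.index_select(0, indices.view(-1))   # (num_selected, D)
b = bias.index_select(0, indices.view(-1))     # (num_selected,)

num_touched = len(set(indices.view(-1).tolist()))
print(num_touched)                             # 3 distinct classes are involved
```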

But when I look into the gradients after backpropagation,

https://github.com/Stonesjtu/Pytorch-NCE/blob/862afc666445dca4ce9d24a3eb1e073255edb92e/main.py#L125

the number of non-zero gradient entries for the bias (computed as (list(model.parameters())[-1].grad != 0).sum().item()) is always smaller than the number of selected classes.

Do you have any idea why this happens? I am checking this because I think we should do sparse parameter updates in advanced optimizers like Adam; otherwise the zero gradients of the parameters for unselected classes may destroy the momentum. Handled that way, it might also help with the speed issue.
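Here is a hedged sketch of the check I am doing, with made-up shapes and a toy forward pass instead of the real model:

```python
import torch

V, D = 10, 8
weight = torch.randn(V, D, requires_grad=True)
bias = torch.randn(V, requires_grad=True)

indices = torch.tensor([2, 7, 7, 5])           # target + sampled noise classes
x = torch.randn(indices.numel(), D)

# toy forward/backward that only touches the selected classes
logits = (weight.index_select(0, indices) * x).sum(1) + bias.index_select(0, indices)
logits.sum().backward()

num_selected = len(set(indices.tolist()))
num_nonzero = (bias.grad != 0).long().sum().item()
print(num_selected, num_nonzero)               # I would expect these two to match
```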

chaoqing · Aug 10 '18

@chaoqing squeeze(0) is definitely a better choice. As you said, squeeze removes every dimension with size 1, which is unexpected for N = 1. A PR is appreciated.

For the non-zero-element mismatch, I suspect that .grad != 0 returns a uint8 tensor (https://pytorch.org/docs/master/torch.html#torch.eq), and it is quite possible for the summation to overflow. Can you try (list(model.parameters())[-1].grad != 0).long().sum().item()?
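As a tiny illustration of the suggested check (the overflow concern applies when the comparison yields a uint8 tensor, as it did on the PyTorch versions of that time):

```python
import torch

grad = torch.zeros(1000)
grad[:300] = 1.0                 # pretend 300 bias entries received a gradient
mask = grad != 0                 # uint8 on older PyTorch, bool on newer versions
print(mask.long().sum().item())  # 300 -- cast to long so the sum accumulates in int64
```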

Stonesjtu · Aug 13 '18

@Stonesjtu the non-zero-element mismatch still exists after I tried your check. Actually, I first checked the non-zero positions and found that the non-zero gradients always lie within the target or sampled classes, which is expected. We may need to look into how the gradient is formulated to find out why some of the touched classes end up with zero gradient.

chaoqing · Aug 14 '18