pytorch-sgns
why initialize the first line of embedding matrix as 0? t.zeros(1, self.embedding_size)
Uh, I wanted the row at padding_idx to be 0, but in the code it initializes the first row even if padding_idx is not zero. Would you PR it, or should I fix it?
@theeluwin Oh, I just don't understand your initialization. I suppose there must be some reason that I don't know about, so I wanted to ask. Also, I rewrote a copy of your code but it doesn't work; would you mind looking at my code and helping me find the error? Thank you for your reply.
padding_idx denotes, literally, the index of the padding word. The padding word should (usually) have a zero vector, so it is initialized as such.
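To make the initialization concrete, here is a minimal sketch (not the repo's actual code; sizes and the seed are made up) of initializing an embedding matrix randomly and then zeroing out the row at padding_idx, which is the behavior being asked for, rather than always zeroing the first row:

```python
import numpy as np

vocab_size, embedding_size, padding_idx = 5, 4, 2  # hypothetical values

# Randomly initialize the embedding matrix, then zero out the padding row.
# Keying on padding_idx (instead of always row 0) handles the case where
# padding_idx is not zero.
rng = np.random.default_rng(0)
embeddings = rng.uniform(-0.5, 0.5, size=(vocab_size, embedding_size))
embeddings[padding_idx] = 0.0  # the padding word gets a zero vector

print(embeddings[padding_idx])  # a row of zeros
```

In PyTorch itself, passing padding_idx to nn.Embedding zeroes that row and keeps its gradient at zero, which achieves the same effect.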
About your code, I'm not really sure if I can help you but if it's possible, I'll try.
Hi @theeluwin, the padding word you mean here is, of course, the "unknown word", right? And is it normal to initialize it with zeros? BTW, thanks for sharing your repo. I am a newbie in language stuff and it really helps me a lot! :)
@GyeongJoHwang Normally, the "unknown word", or UNK token, denotes a rare word rather than padding, so making it 0 might be one option, but usually we use a random vector or the mean vector of the whole vocabulary. Padding tokens are for when you want to make sentences have equal length (for batch training and such): for example, to turn a 13-token sentence into a 20-token one, you "pad" it with 7 tokens, probably zeros.
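The padding idea above can be sketched in a few lines of plain Python (the pad id 0 and the helper name are just for illustration):

```python
PAD = 0  # hypothetical id of the padding token


def pad_sentence(token_ids, target_len, pad_id=PAD):
    """Right-pad a list of token ids with pad_id up to target_len."""
    return token_ids + [pad_id] * (target_len - len(token_ids))


sentence = list(range(1, 14))        # a 13-token sentence (ids are made up)
padded = pad_sentence(sentence, 20)  # pad with 7 zeros to reach length 20
print(len(padded))  # 20
```

Since the pad token's embedding is the zero vector, those 7 extra positions contribute nothing to the model; they only make the batch rectangular.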