pytorch-sgns

Why initialize the first row of the embedding matrix as 0? t.zeros(1, self.embedding_size)

yl-jiang opened this issue on Jul 09 '18 · 5 comments

yl-jiang · Jul 09 '18 14:07

Uh, I wanted the row at padding_idx to be zero, but the code initializes the first row even when padding_idx is not 0. Would you like to open a PR, or should I fix it?
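
A minimal sketch of what such a fix might look like (names and sizes are hypothetical; the repo's original init presumably zeroes row 0 via the t.zeros(1, self.embedding_size) quoted in the title):

```python
import torch as t
import torch.nn as nn

# Hypothetical sketch of the fix discussed above: initialize all rows
# uniformly, then zero exactly the row at padding_idx, rather than
# always zeroing the first row.
vocab_size, embedding_size, padding_idx = 20000, 300, 3

weight = t.empty(vocab_size, embedding_size)
weight.uniform_(-0.5 / embedding_size, 0.5 / embedding_size)
weight[padding_idx].zero_()  # zero the padding row, wherever it is

embedding = nn.Embedding(vocab_size, embedding_size, padding_idx=padding_idx)
embedding.weight = nn.Parameter(weight)  # padding_idx also masks its gradient
```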

theeluwin · Jul 09 '18 16:07

@theeluwin Oh, I just didn't understand the initialization. I assumed there was a reason I wasn't aware of, so I wanted to ask about it. Also, I rewrote a copy of your code but it doesn't work; would you mind looking at my code and helping me find the error? Thank you for your reply.

yl-jiang · Jul 10 '18 01:07

padding_idx denotes, literally, the index of the padding word. The padding word should (usually) have a zero vector, so it should be initialized that way. As for your code, I'm not really sure I can help, but if it's possible, I'll try.
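
For illustration only (not code from this repo), PyTorch's own nn.Embedding follows the same convention: when padding_idx is set, that row is initialized to zeros and its gradient stays zero, so the padding vector remains zero through training:

```python
import torch as t
import torch.nn as nn

# With padding_idx set, nn.Embedding zero-initializes that row and
# never updates it: the padding row receives no gradient.
emb = nn.Embedding(num_embeddings=10, embedding_dim=4, padding_idx=0)
print(emb.weight[0])       # tensor([0., 0., 0., 0.], ...)

out = emb(t.tensor([0, 3, 0]))
out.sum().backward()
print(emb.weight.grad[0])  # all zeros: the padding row gets no gradient
```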

theeluwin · Jul 11 '18 15:07

Hi @theeluwin, the padding word you mean here is, of course, the "unknown word", right? And is it normal to initialize it with zeros? BTW, thanks for sharing your repo. I am a newbie in language stuff and it really helps me a lot! :)

Gyeongjo · Aug 09 '18 17:08

@GyeongJoHwang Normally, the "unknown word", or UNK token, denotes a rare word rather than padding, so making it zero might be one solution, but usually we use a random vector or the mean vector over the whole vocabulary. Padding tokens are for when you want sentences to have equal length (for batch training and such): for example, to turn a 13-token sentence into a 20-token one, you "pad" it with 7 tokens, probably zeros.
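
A toy illustration of that padding step, using PyTorch's standard pad_sequence (sizes are hypothetical, not from this repo):

```python
import torch as t
from torch.nn.utils.rnn import pad_sequence

# Pad a 13-token sentence to length 20 with the padding index (0 here)
# so it can be batched together with a longer sentence.
PAD = 0
sent13 = t.randint(1, 100, (13,))  # 13 word indices, none equal to PAD
sent20 = t.randint(1, 100, (20,))  # a 20-token sentence in the same batch

batch = pad_sequence([sent13, sent20], batch_first=True, padding_value=PAD)
print(batch.shape)  # torch.Size([2, 20]); the short row ends in 7 zeros
```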

theeluwin · Aug 10 '18 05:08