word2vec_pytorch
Why do the embeddings need to be initialized to [-0.5/ebd_size, 0.5/ebd_size]?
As the title says — what is the benefit of initializing this way? Thanks!
The original C source code seems to initialize it this way. See section 6.7 at the following link: https://blog.csdn.net/itplus/article/details/37999613
This is identical to the initialization in the original word2vec implementation. I tried other initialization methods; some of them drove the vector values to NaN or Inf, while others showed similar performance.
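A minimal sketch of this initialization scheme (NumPy is used here for a self-contained illustration; in a PyTorch repo the same values would typically be written into `nn.Embedding.weight.data`, and the function name `init_embeddings` is my own). The intuition, hedged: scaling the uniform range by 1/emb_dim keeps initial dot products between vectors small, which may help avoid the overflow that can produce NaN/Inf early in training.

```python
import numpy as np

def init_embeddings(vocab_size: int, emb_dim: int, seed: int = 0) -> np.ndarray:
    """word2vec-style init: uniform samples in [-0.5/emb_dim, 0.5/emb_dim]."""
    rng = np.random.default_rng(seed)
    bound = 0.5 / emb_dim
    # Each component is small, so a dot product of two rows stays roughly
    # bounded regardless of emb_dim.
    return rng.uniform(-bound, bound, size=(vocab_size, emb_dim))

emb = init_embeddings(vocab_size=1000, emb_dim=100)
print(emb.shape)                      # (1000, 100)
print(abs(emb).max() <= 0.5 / 100)    # True: all values inside the bound
```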