Facebook-Messenger-Bot
wordVectors in seq2seq
Hi, I have a question about how wordVectors is used in seq2seq.py, since I need to adapt the code to my dataset of very long sentences (I saw that the sentence length is capped at 15 in the createTrainingMatrices function in seq2seq.py, but my samples are almost short paragraphs ^_^). The createTrainingMatrices function builds a vector for every sentence out of word indices, but why not use the pre-trained embeddingMatrix.npy produced by word2vec.py? The line "wordVectors = np.load('models/embeddingMatrix.npy')" appears in seq2seq.py, but wordVectors doesn't seem to actually be used anywhere after that?
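To make my question concrete, here is a minimal sketch of how I imagined the pre-trained vectors would be consumed. The shapes and the random stand-in matrix below are assumptions for illustration only; the real code would load models/embeddingMatrix.npy instead:

```python
import numpy as np

# Stand-in for the matrix produced by word2vec.py -- in the real code
# this would be: wordVectors = np.load('models/embeddingMatrix.npy')
vocab_size, embedding_dim = 1000, 64  # assumed sizes for illustration
wordVectors = np.random.rand(vocab_size, embedding_dim).astype(np.float32)

# A "sentence" as created by createTrainingMatrices: a vector of word indices
sentence_indices = np.array([12, 47, 0, 999])

# Mapping indices to their pre-trained vectors is just row indexing;
# in TensorFlow this would be tf.nn.embedding_lookup(wordVectors, sentence_indices)
sentence_embeddings = wordVectors[sentence_indices]
print(sentence_embeddings.shape)  # (4, 64)
```

So my expectation was that the index matrices feed a lookup into the pre-trained embeddings somewhere in the model, rather than the embeddings being trained from scratch.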