deepnl
dl-sentiwords.py word2vec variant training is extremely slow!
Hi,
dl-sentiwords.py training.tsv --vocab SSWE_words.txt --vectors SSWE_vectors.txt --variant word2vec --w 3 -e 50 --eps 10 --min-occur 10 --threads 30
My training set is a file of 1.5 million tweets, and after 17 hours of execution the training has only advanced by 1 or 2 epochs.
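For scale, here is a back-of-the-envelope projection of the total run time, assuming the rate observed so far holds for the whole run (the 1.5 epochs figure is just the midpoint of "1 or 2"):

```python
# Back-of-the-envelope ETA, using the numbers from the run above.
tweets = 1_500_000
hours_elapsed = 17
epochs_done = 1.5          # midpoint estimate of "1 or 2 epochs"
epochs_total = 50          # the -e 50 setting

hours_per_epoch = hours_elapsed / epochs_done
eta_hours = hours_per_epoch * (epochs_total - epochs_done)
print(f"~{hours_per_epoch:.1f} h/epoch, ~{eta_hours / 24:.0f} days remaining")
# -> ~11.3 h/epoch, ~23 days remaining
```

At this rate the full 50 epochs would take more than three weeks.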
Any ideas to make things a little faster, please?
Regards.
I have the same problem with my data. Does anyone have an idea?
I am also facing the same issue. But more importantly, the error value increases with each epoch and the accuracy always remains 0.00. Can anyone explain why the error value is not decreasing?
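One generic suspect for an error that grows every epoch is a step-size-style parameter set too large; I am not sure which of dl-sentiwords.py's flags plays that role, so the following is only a minimal, generic illustration (not deepnl's actual code) of how plain gradient descent diverges when the step size is too big:

```python
# Generic illustration (NOT deepnl's code): with too large a step size,
# gradient descent on even a simple quadratic loss diverges, so the
# reported error grows every epoch instead of shrinking.
def train(step_size, epochs=5):
    w = 1.0                      # single parameter, loss = w**2
    for epoch in range(epochs):
        grad = 2 * w             # d(w**2)/dw
        w -= step_size * grad
        print(f"epoch {epoch}: error = {w * w:.3f}")

train(step_size=0.1)   # error shrinks each epoch
train(step_size=1.5)   # error explodes each epoch
```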
All 64 GB of RAM are in use after 3 days, and the code has only reached epoch 13! See the attached graph.
Does anyone have a solution?