Sheng Wan
Thanks for any suggestions.
I haven't tried the model, but looking into the datasets/twitter folder, there's no file named metadata.pkl. My guess is that it's inside the zipped archive; try unzipping it manually first.
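If manual unzipping is awkward, here is a minimal sketch of extracting the archive in Python; the archive path and destination directory are assumptions, so adjust them to match your local layout:

```python
import os
import zipfile

def extract_dataset(archive_path, dest_dir):
    """Extract a zipped dataset so files like metadata.pkl become visible."""
    os.makedirs(dest_dir, exist_ok=True)
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest_dir)
    # Return the extracted file names so you can confirm metadata.pkl is there.
    return sorted(os.listdir(dest_dir))
```

For example, `extract_dataset("datasets/twitter.zip", "datasets/twitter")` (hypothetical paths) should list metadata.pkl if it was packed inside the archive.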
Try a bigger training dataset.
Do you mean the outputs are identical throughout, or only the last few characters are the same?
The identical replies were initially caused by a learning rate that was too high together with a small batch size. Even after fixing that, the model still failed to learn many things, which is probably a data or batch size issue.
Works with 1.8.0.
batch() doesn't require this argument; shuffle_batch() does.
Reduce the batch size, the number of hidden layers, the hidden state size, etc.
A precise word segmentation tool is definitely helpful for reducing the vocabulary size. Alternatively, try CopyNet.