conversation-tensorflow
TensorFlow implementation of Conversation Models
In model.py, your code handles the targets like this:
```
self.decoder_inputs = labels
decoder_input_shift_1 = tf.slice(self.decoder_inputs, [0, 1], [batch_size, Config.data.max_seq_length-1])
pad_tokens = tf.zeros([batch_size, 1], dtype=tf.int32)
# make target (right shift 1 from decoder_inputs)
self.targets = tf.concat([decoder_input_shift_1, pad_tokens], axis=1)
```
I have looked at some other implementations: some use the labels directly as self.targets, while others instead prepend a special start token on the left and drop the last token on the right. Is there a reason for the difference?...
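Both conventions line the decoder input at step t up with the token it should predict; they only differ in where the shift happens. A minimal sketch of the two, outside the repo's graph code, where the toy shapes and the GO_ID/PAD_ID token ids are assumptions for illustration:

```python
import tensorflow as tf

# Toy shapes for illustration; in the repo these come from the config.
batch_size, max_seq_length = 2, 5
GO_ID, PAD_ID = 1, 0  # placeholder special-token ids (assumption)

labels = tf.constant([[4, 5, 6, 2, 0],
                      [7, 8, 2, 0, 0]], dtype=tf.int32)

# Convention A (this repo): feed the labels as decoder inputs and build the
# targets by shifting left one step and padding on the right, so at step t
# the decoder reads token t and is trained to predict token t+1.
decoder_inputs_a = labels
targets_a = tf.concat(
    [tf.slice(labels, [0, 1], [batch_size, max_seq_length - 1]),
     tf.zeros([batch_size, 1], dtype=tf.int32)], axis=1)

# Convention B (other implementations): keep the labels as targets and build
# the decoder inputs by prepending a GO token and dropping the last token.
decoder_inputs_b = tf.concat(
    [tf.fill([batch_size, 1], GO_ID),
     tf.slice(labels, [0, 0], [batch_size, max_seq_length - 1])], axis=1)
targets_b = labels

# In both cases the decoder sees one token of history and is asked for the
# next one (teacher forcing); only the bookkeeping of the shift differs.
```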
When I run your project with TF 1.6 or TF 1.7, I get an error: TypeError: The two structures don't have the same sequence type. First structure has type ,...
When running python main.py --config cornell-movie-dialogs --mode train_and_evaluate I get the error message:
> load vocab ...
> vocab size: 41676
> make Training data and Test data Start....
> ...
Running python main.py --config cornell-movie-dialogs --mode train to the end (100000 steps) results in a training loss of about 2.6 and a test loss of about 8.4. Which hyperparameters did you use?...
I got it implemented, but I ended up with far too many if statements: tf.stack keeps behaving differently for the LSTM case and the bidirectional case. Why would that be? In particular, with LSTM versus GRU (plain RNN) cells the shape and type of the output differ, so sometimes I have to call tf.stack([output]) and sometimes tf.stack(output). Hmm..
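The divergence comes from the return types of the TF 1.x RNN APIs rather than from tf.stack itself: an LSTM cell's final state is an LSTMStateTuple of two tensors (c, h), a GRU or basic RNN cell's state is a single tensor, and tf.nn.bidirectional_dynamic_rnn returns (forward, backward) pairs for both outputs and states. A small sketch of one way to normalize this with a hypothetical helper (not the repo's code):

```python
import tensorflow as tf

def state_to_tensor(state):
    """Flatten one cell's final state into a single Tensor (a sketch, not the repo's helper)."""
    if isinstance(state, tf.nn.rnn_cell.LSTMStateTuple):
        # LSTM: two Tensors (c, h) -> stack them into shape [2, batch, units].
        return tf.stack([state.c, state.h])
    # GRU / basic RNN: the state is already a single Tensor of shape [batch, units].
    return state

# Bidirectional RNNs return a (forward, backward) pair for both outputs and
# states; a common way to merge the outputs is concatenation on the feature axis:
#   outputs, states = tf.nn.bidirectional_dynamic_rnn(cell_fw, cell_bw, inputs, ...)
#   merged_outputs = tf.concat(outputs, axis=-1)   # [batch, time, 2 * units]
```

Dispatching on the state type once, in a helper like this, keeps the per-cell if statements out of the rest of the graph-building code.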
Hi, is it possible to use customer service conversations as a dataset with this model? For example:
```
Hello May I help you ?
I want to ask something
...
```
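In principle any transcript can be turned into the (utterance, reply) pairs a seq2seq conversation model trains on. A hypothetical preprocessing sketch; the pairing scheme and output file names are assumptions, not the repo's actual data pipeline:

```python
# Split a customer-service transcript into consecutive (input, response)
# utterance pairs, one pair per adjacent turn.
transcript = [
    "Hello May I help you ?",
    "I want to ask something",
    "Sure, go ahead",
]

pairs = list(zip(transcript[:-1], transcript[1:]))
# [("Hello May I help you ?", "I want to ask something"),
#  ("I want to ask something", "Sure, go ahead")]

# Write encoder/decoder sides to parallel files (file names are assumptions).
with open("train.enc", "w") as enc, open("train.dec", "w") as dec:
    for question, answer in pairs:
        enc.write(question + "\n")   # encoder side: previous utterance
        dec.write(answer + "\n")     # decoder side: the reply to learn
```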