tf_chatbot_seq2seq_antilm
--reinforce_learn doesn't work!
On CPU setting, after the following changes:

```diff
@@ -331,7 +332,9 @@ class Seq2SeqModel(object):
     while True:
       #----[Step]----------------------------------------
       encoder_state, step_loss, output_logits = self.step(session, encoder_inputs, decoder_inputs, target_weights,
-                                  bucket_id, training=False, force_dec_input=False)
+                                  bucket_id, forward_only=False, force_dec_input=False)
```
and
```diff
@@ -395,7 +399,7 @@ class Seq2SeqModel(object):
       # step
       _, _, output_logits = self.step(session, encoder_inputs, decoder_inputs, target_weights,
-                                  bucket_id, training=False, force_dec_input=True)
+                                  bucket_id, forward_only=False, force_dec_input=True)
```
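For reference, both hunks only rename the keyword argument. If step() here follows the convention of the TensorFlow translate tutorial this repo derives from, its signature looks roughly like the sketch below (an assumption, not verified against this revision). Note the inverted polarity: training=False reads as "inference", while forward_only=False requests a training step.

```python
# Hypothetical sketch of the step() signature the hunks above assume,
# modeled on the TensorFlow translate tutorial (not this repo's exact code).
def step(self, session, encoder_inputs, decoder_inputs, target_weights,
         bucket_id, forward_only=False, force_dec_input=False):
    # forward_only=False : run the parameter-update op (training)
    # forward_only=True  : inference only; output logits are returned
    # force_dec_input    : feed ground-truth decoder inputs (teacher forcing)
    ...
```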
I still got the following error at function logits2tokens:
```
Creating 4 layers of 256 units.
Created model with fresh parameters.
Reading development and training data (limit: 0).
reading data line 100000
reading data line 200000
[INPUT]: [b'\xe8\xae\x93', b'\xe4\xbd\xa0', b'\xe8\xbd\x89\xe9\x81\x8e', b'\xe8\xba\xab\xe8\x83\x8c', b'\xe5\xb0\x8d', b'\xe8\x91\x97', b'\xe6\x88\x91', b'\xe7\x82\xba', b'\xe4\xbd\xa0', b'\xe7\xb9\xab', b'\xe4\xb8\x8a', b'\xe6\x88\x91', b'\xe7\x9a\x84', b'\xe6\x89\xbf', b'\xe8\xab\xbe', b'_PAD', b'_PAD', b'_PAD', b'_PAD', b'_PAD']
output_logits is: None
Traceback (most recent call last):
  File "main.py", line 28, in <module>
```
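The None logits would be consistent with the tutorial-style return convention, in which a training step (forward_only=False) returns no output logits at all. Below is a sketch of that convention plus the kind of guard a logits2tokens-style helper would need; logits2tokens_safe is a hypothetical name, not the repo's function.

```python
# Tutorial-style return convention (assumed, not verified for this repo):
#   if not forward_only:
#       return gradient_norm, loss, None     # training step: no logits
#   else:
#       return None, loss, output_logits     # inference: logits available

def logits2tokens_safe(output_logits):
    """Hypothetical guard around greedy token decoding."""
    if output_logits is None:
        raise ValueError(
            "output_logits is None: step() ran with forward_only=False; "
            "decode with forward_only=True instead")
    # greedy decode: argmax over the vocabulary at each decoder position
    return [int(logits.argmax(axis=1)[0]) for logits in output_logits]
```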
Did you find out anything about --reinforce_learn not working?
@yogesh-0586 Still struggling with it.
For the first step() call, the original parameter training=False means "do prediction", which corresponds to forward_only=True. However, I don't know whether force_dec_input should be set to True or False. I am working on reinforce_learn too.
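To make the two flags concrete, here is a framework-free toy loop using entirely hypothetical names, just to illustrate the usual semantics: forward_only decides whether parameters get updated, and force_dec_input decides whether the decoder is fed the ground-truth token (teacher forcing) or its own previous prediction.

```python
from typing import List, Tuple

def toy_step(targets: List[int], forward_only: bool,
             force_dec_input: bool) -> Tuple[float, List[int]]:
    """Toy decoder loop; the 'model' just predicts prev + 1."""
    loss, outputs, prev = 0.0, [], 0   # prev starts at a _GO-like token
    for gold in targets:
        pred = prev + 1                # stand-in for the decoder RNN cell
        loss += abs(pred - gold)       # stand-in for cross-entropy
        outputs.append(pred)
        # teacher forcing feeds the gold token; otherwise feed the prediction
        prev = gold if force_dec_input else pred
    if not forward_only:
        pass  # a real model would apply gradients here to update weights
    return loss, outputs

# Teacher-forced training-style call vs. free-running decode:
print(toy_step([1, 2, 4], forward_only=False, force_dec_input=True))
print(toy_step([1, 2, 4], forward_only=True, force_dec_input=False))
```

Under that reading, a decoding pass that wants usable logits would call with forward_only=True, which may be why the forward_only=False change above still yields None.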