
Importing using the last check point

Open charan16 opened this issue 8 years ago • 3 comments

    Attempting to use uninitialized value decoder/embedding_rnn_seq2seq/embedding_rnn_decoder/embedding/Adam_1
    [[Node: decoder/embedding_rnn_seq2seq/embedding_rnn_decoder/embedding/Adam_1/read = Identity[T=DT_FLOAT, _class=["loc:@decoder/embedding_rnn_seq2seq/embedding_rnn_decoder/embedding"], _device="/job:localhost/replica:0/task:0/cpu:0"]]]

charan16 avatar Feb 03 '17 07:02 charan16

I'm having the same issue.

superMDguy avatar Apr 25 '17 01:04 superMDguy

Me too. Has anybody got a fix?

rojansudev avatar Aug 04 '17 06:08 rojansudev

Hey, I was having this problem too. The problem is that you're likely not loading the ckpt files.

It turns out that running the code as-is won't raise an error even if no checkpoint file was actually loaded, because the code silently skips the restore step when it doesn't find the files:

    def restore_last_session(self):
        saver = tf.train.Saver()
        # create a session
        sess = tf.Session()
        # get checkpoint state
        ckpt = tf.train.get_checkpoint_state(self.ckpt_path)
        # restore session
        if ckpt and ckpt.model_checkpoint_path:  # <<< silently skipped when no checkpoint is found
            saver.restore(sess, ckpt.model_checkpoint_path)
        # return to user
        return sess
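As a side note, the silent skip can be surfaced early with a plain file check. This is a minimal sketch (the `checkpoint_available` and `ensure_checkpoint` helpers are hypothetical, not part of the repo); it only relies on the fact that TensorFlow writes a `checkpoint` state file into the checkpoint directory, which is what `tf.train.get_checkpoint_state()` reads:

```python
import os

def checkpoint_available(ckpt_dir):
    """Return True if ckpt_dir contains TensorFlow's 'checkpoint' state file.

    tf.train.get_checkpoint_state() returns None when this file is absent,
    which is exactly the case the `if` in restore_last_session() skips over
    without any warning.
    """
    return os.path.isfile(os.path.join(ckpt_dir, 'checkpoint'))

def ensure_checkpoint(ckpt_dir):
    """Fail loudly instead of silently running with uninitialized variables."""
    if not checkpoint_available(ckpt_dir):
        raise FileNotFoundError(
            "No checkpoint state found in '%s'; did you decompress the "
            "model files into this folder?" % ckpt_dir)
```

Calling something like `ensure_checkpoint(self.ckpt_path)` at the top of `restore_last_session` would turn the silent skip into an immediate, descriptive error.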

To fix this issue, do one of the following after pulling and decompressing the model:

  1. make sure all the ckpt files are directly in the ckpt folder, or

  2. modify the ckpt_path (line 29 in chatbot.py) to be

    ckpt = 'ckpt/seq2seq_twitter_1024x3h_i43000'

*This assumes your uncompressed folder has the same name as mine. If not, change seq2seq_twitter_1024x3h_i43000 to whatever you've named it.

That solved my problem and will likely fix yours.

nunezpaul avatar Nov 14 '17 03:11 nunezpaul