CGMH
How to try this model?
I have downloaded the pretrained model and tried to run python key_gen.py from cgmh/ and from cgmh/key_gen/. It fails with the following error:
Traceback (most recent call last):
File "key_gen.py", line 10, in
Above the traceback, the error is: DataLossError (see above for traceback): Checksum does not match: stored 3660790191 vs. calculated on the restored bytes 2272260144 [[Node: save_1/RestoreV2_3 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save_1/Const_0_0, save_1/RestoreV2_3/tensor_names, save_1/RestoreV2_3/shape_and_slices)]] [[Node: save_1/restore_all/NoOp_1/_38 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_39_save_1/restore_all/NoOp_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]]
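For reference, a DataLossError checksum mismatch usually means the checkpoint bytes on disk differ from what was originally written, e.g. a corrupted or incomplete download. A minimal sketch (TensorFlow 1.x) to probe whether a checkpoint is readable is shown below; the checkpoint prefix path is only an assumption and should be pointed at your own files:

# Minimal sketch (TF 1.x): try to read every variable stored in a checkpoint.
# NOTE: the checkpoint prefix is a hypothetical example, not the path shipped
# with CGMH -- adjust it to wherever your downloaded checkpoint lives.
import tensorflow as tf

ckpt_prefix = "./model/model.ckpt"  # assumed checkpoint prefix

reader = tf.train.NewCheckpointReader(ckpt_prefix)
for name in reader.get_variable_to_shape_map():
    try:
        reader.get_tensor(name)  # reading the tensor forces the stored bytes to be checksummed
    except tf.errors.DataLossError as e:
        print("corrupted variable:", name, e)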
Hello, I have the same issue as you; I copied dict_utils.py from utils/dict_emb/ to key_gen/.
Did you finish the whole procedure and generate sentences as shown in the paper?
No, sorry; actually, I am using the system to paraphrase my own corpus and am currently running a forward pass. Nevertheless, I see in your config.py that some files are missing for you: ./POS/english-models. These files were missing for me too, so I mailed Ning Miao, and he told me to download them from https://github.com/frcchang/zpar/releases/download/v0.7.5/english-models.zip Also, in key_gen.py, replace your run_epoch() function with:
def run_epoch(sess, model, input, sequence_length, target=None, mode='train'):
    # Runs the model on the given data.
    if mode == 'train':
        # Train the language model.
        _, cost = sess.run([model._train_op, model._cost],
                           feed_dict={model._input: input,
                                      model._target: target,
                                      model._sequence_length: sequence_length})
        return cost
    elif mode == 'test':
        # Test the language model.
        cost = sess.run(model._cost,
                        feed_dict={model._input: input,
                                   model._target: target,
                                   model._sequence_length: sequence_length})
        return cost
    else:
        # Use the language model to calculate sentence probabilities.
        output_prob = sess.run(model._output_prob,
                               feed_dict={model._input: input,
                                          model._sequence_length: sequence_length})
        return output_prob
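If it helps, a call to the patched run_epoch for scoring sentence probabilities would look roughly like the sketch below; sess, lm_model, batch_input, and batch_lengths are placeholder names for objects that key_gen.py already builds, not part of the original code:

# Hypothetical usage of the patched run_epoch; variable names are illustrative.
# Any mode other than 'train' or 'test' falls through to the probability branch.
output_prob = run_epoch(sess, lm_model, batch_input, batch_lengths, mode='use')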
Thank you for your help! That works! But the 'Checksum does not match' error still exists:
"""
2019-03-01 22:04:23.564725: W tensorflow/core/framework/op_kernel.cc:1192] Data loss: Checksum does not match: stored 1220741561 vs. calculated on the restored bytes 696859937
2019-03-01 22:04:23.565097: W tensorflow/core/framework/op_kernel.cc:1192] Data loss: Checksum does not match: stored 1059332909 vs. calculated on the restored bytes 3559770914
2019-03-01 22:04:23.565205: W tensorflow/core/framework/op_kernel.cc:1192] Data loss: Checksum does not match: stored 3660790191 vs. calculated on the restored bytes 2272260144
2019-03-01 22:04:23.568547: W tensorflow/core/framework/op_kernel.cc:1192] Data loss: Checksum does not match: stored 289024584 vs. calculated on the restored bytes 522105483
2019-03-01 22:04:23.569561: W tensorflow/core/framework/op_kernel.cc:1192] Data loss: Checksum does not match: stored 2212803301 vs. calculated on the restored bytes 3139336665
2019-03-01 22:04:23.613723: W tensorflow/core/framework/op_kernel.cc:1192] Data loss: Checksum does not match: stored 800486089 vs. calculated on the restored bytes 2000218608
2019-03-01 22:04:23.614614: W tensorflow/core/framework/op_kernel.cc:1192] Data loss: Checksum does not match: stored 1629221737 vs. calculated on the restored bytes 2260531024
"""
I am thinking the model params in config.py should change; I did change vocab_size earlier to avoid an error. Otherwise, I will try to train a model myself. Thanks again!
Sorry, I don't have this issue... I hope you will fix it :)
Thank you for pointing out the problem. It results from a broken checkpoint file, and I have already updated the file. Some path problems have also been fixed. Please try the new version of the CGMH code and download a new checkpoint file. I'm very sorry for the trouble.
@CroquetteTheThe do you have this issue: FileNotFoundError: [Errno 2] No such file or directory: '../data/1-billion/1-billion.txt'?