PQG-pytorch
Evaluate Code
It seems that after I trained the model on our own dataset using the code from the master branch, I cannot evaluate it on the test set unless I switch to the ori- branch. However, I don't want to waste the 2 days of training. What can I do to evaluate the test data in the master branch?
I compared the code between the ori-code branch and the master branch. The checkpoints they save differ significantly: one saves the encoder and generator separately, while the other simply saves the whole model.
In the master branch you can use the evaluate_scores function to evaluate on your own data. If you have saved your model, you just have to generate paraphrases and then pass the generated paraphrases and the ground truth to evaluate_scores, which will return the scores of the generated paraphrases with respect to the ground truth in your data.
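A minimal sketch of that flow, assuming evaluate_scores can be imported from the repo's evaluation module and takes the generated paraphrases and ground truth in that order (check the master-branch source for the actual module path and signature):

```python
# Hypothetical sketch -- the import path and exact signature of
# evaluate_scores are assumptions; adjust to the master-branch code.
from evaluate import evaluate_scores  # assumed import location

# Paraphrases produced by your saved model and the corresponding
# ground-truth paraphrases, one string per test example.
generated = [
    "how can i learn python quickly",
    "what is the capital city of france",
]
ground_truth = [
    "how do i learn python fast",
    "which city is the capital of france",
]

# Compare the generated paraphrases against the ground truth and
# print the returned evaluation scores (e.g. BLEU).
scores = evaluate_scores(generated, ground_truth)
print(scores)
```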
Thanks a lot.
Which model in the paper is the best? EDLPGS?
It's EDLPS in terms of BLEU scores.