
Tensorflow version

Open zhezhaoa opened this issue 6 years ago • 3 comments

I am really happy to see question generation work like this. However, I am not familiar with Torch. Is there any resource written in TensorFlow that can reproduce your work?

zhezhaoa avatar Mar 23 '18 10:03 zhezhaoa

I see that you implemented a seq2seq model in TF. Could you provide the source code, or say which toolkit you used? I have tried many times with various seq2seq models, but the generated questions are strange and unrelated to the source sentence; they don't even share words with the source sentences. Could you give me some suggestions?

zhezhaoa avatar Mar 25 '18 04:03 zhezhaoa
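(Editor's note: when generated questions share no words with the source sentence, a standard remedy is an attention mechanism over the encoder states, optionally extended with a copy/pointer mechanism. The sketch below is not the code from this repo; it is a minimal NumPy illustration of dot-product attention with made-up dimensions, just to show how each decoding step is tied back to source tokens.)

```python
import numpy as np

def attention(dec_state, enc_states):
    """Dot-product attention: score each encoder state against the
    current decoder state, softmax the scores over source positions,
    and return the weights plus the weighted-sum context vector."""
    scores = enc_states @ dec_state              # shape: (src_len,)
    weights = np.exp(scores - scores.max())      # numerically stable softmax
    weights /= weights.sum()
    context = weights @ enc_states               # shape: (hidden,)
    return weights, context

rng = np.random.default_rng(0)
enc_states = rng.normal(size=(5, 8))   # 5 source tokens, hidden size 8
dec_state = rng.normal(size=8)         # current decoder hidden state
weights, context = attention(dec_state, enc_states)
print(weights.round(3), context.shape)
```

The context vector is concatenated with the decoder state before predicting the next word, so every output token is conditioned on a soft selection of source tokens; a copy mechanism goes further and lets the model emit a source token directly.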

Sorry, but I don't have the TF code easily available right now. I think a bunch of small tricks might matter a lot.


xinyadu avatar Mar 26 '18 02:03 xinyadu

Hi, thank you very much for your kind suggestion~ I am a rookie in seq2seq and now I am confused by the evaluation. I use opennmt-tf directly, and multi-bleu-detok.perl for evaluation; the returned BLEU is 2.21 (or 0.0221?). However, on the same output, eval.py (the evaluation script in this toolkit) returns Bleu1: 0.263, Bleu2: 0.10017, Bleu3: 0.04796, Bleu4: 0.026. The results from your evaluation script seem much higher than those from opennmt-tf. I wonder why there is such a big gap between the OpenNMT evaluation and the evaluation script in your toolkit. I look forward to your reply~ This toolkit is extremely useful and important to me!

zhezhaoa avatar Mar 27 '18 15:03 zhezhaoa
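(Editor's note: part of the gap here may simply be the reporting scale. multi-bleu scripts print BLEU on a 0-100 scale, while coco-caption-style eval scripts report fractions in [0, 1], so 2.21 from multi-bleu-detok.perl and Bleu4 = 0.026 (i.e. 2.6) are actually in the same ballpark. The snippet below uses NLTK as a stand-in for either script, not their exact implementations, to show the same cumulative BLEU on both scales.)

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Toy corpus: one reference question and one generated hypothesis
references = [[["what", "is", "the", "capital", "of", "france"]]]
hypotheses = [["what", "is", "france", "capital"]]

smooth = SmoothingFunction().method1  # avoids zero n-gram counts

# Cumulative BLEU-1 and BLEU-4 as fractions in [0, 1] (eval.py style)
bleu1 = corpus_bleu(references, hypotheses,
                    weights=(1, 0, 0, 0), smoothing_function=smooth)
bleu4 = corpus_bleu(references, hypotheses,
                    weights=(0.25, 0.25, 0.25, 0.25), smoothing_function=smooth)

# multi-bleu-style scripts report the same quantity scaled to [0, 100]
print(f"Bleu1={bleu1:.4f}  Bleu4={bleu4:.4f}  multi-bleu-style={bleu4 * 100:.2f}")
```

Smoothing choices and tokenization (detokenized vs. pre-tokenized input) also shift the numbers, so scores from different scripts should only be compared when both the scale and the preprocessing match.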