Can we have the configuration files to reproduce the results in the paper?

xycforgithub opened this issue on Jun 24, 2017 · 5 comments

Hi, I'm trying to reproduce your result on SNLI - can we have the configuration files for that? Some configuration options seem unclear to me, e.g., NER_dim, POS_dim, and max_char_per_word. Can we reproduce all the results using the default parameters? Thank you very much!

xycforgithub · Jun 24 '17

I didn't use NER_dim and POS_dim for the SNLI experiment. These options were added for some other internal experiments; they can be activated by setting "with_NER" and "with_POS" to true.
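
A minimal sketch of how these options relate, not the repo's actual config loader. The flag names (with_NER, with_POS, NER_dim, POS_dim) come from this thread; everything else here is illustrative:

```python
# Hypothetical config dict mirroring the options discussed above.
config = {
    "with_NER": False,  # NER features off, as in the SNLI experiment
    "with_POS": False,  # POS features off, as in the SNLI experiment
    "NER_dim": 20,      # only consulted when with_NER is True
    "POS_dim": 20,      # only consulted when with_POS is True
}
```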

I guess you can reproduce my results if you use a config file similar to https://drive.google.com/file/d/0B0PlTAo--BnaQ3N4cXR1b0Z0YU0/view

zhiguowang · Jun 25 '17

Thanks for the response! However, I only get 82.05% accuracy on SNLI with the settings in that config file. Have you run the experiment using that file?

xycforgithub · Jun 27 '17

I'm on vacation now. I will find my config for you when I get back to work.

zhiguowang · Jun 28 '17

Here is one of my configs for the SNLI experiment:

MP_dim=10, NER_dim=20, POS_dim=20,
aggregation_layer_num=2, aggregation_lstm_dim=100,
base_dir='/u/zhigwang/zhigwang1/sentence_match/snli',
batch_size=60, char_emb_dim=20, char_lstm_dim=100,
context_layer_num=2, context_lstm_dim=100,
dropout_rate=0.1, fix_word_vec=True, highway_layer_num=1,
lambda_l2=0.0, learning_rate=0.001, lex_decompsition_dim=-1,
max_char_per_word=10, max_epochs=10, max_sent_length=100,
optimize_type='adam', suffix='snli_7',
with_NER=False, with_POS=False,
with_aggregation_highway=True, with_filter_layer=False,
with_highway=True, with_lex_decomposition=False,
with_match_highway=True,
wo_attentive_match=False, wo_char=False,
wo_full_match=False, wo_left_match=False,
wo_max_attentive_match=False, wo_maxpool_match=False,
wo_right_match=False, word_level_MP_dim=-1

With this config, I got 87.31% accuracy on the dev set.
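
For anyone following along, here is a sketch that serializes the settings quoted above into a JSON file. The assumption (not confirmed in this thread) is that your training script can read hyperparameters from such a file; the filename and loading mechanism are hypothetical, and base_dir/suffix are machine-specific values from the quoted config:

```python
# Write the SNLI settings quoted above to a JSON config file.
import json

snli_config = {
    "MP_dim": 10, "NER_dim": 20, "POS_dim": 20,
    "aggregation_layer_num": 2, "aggregation_lstm_dim": 100,
    "base_dir": "/u/zhigwang/zhigwang1/sentence_match/snli",  # machine-specific
    "batch_size": 60, "char_emb_dim": 20, "char_lstm_dim": 100,
    "context_layer_num": 2, "context_lstm_dim": 100,
    "dropout_rate": 0.1, "fix_word_vec": True, "highway_layer_num": 1,
    "lambda_l2": 0.0, "learning_rate": 0.001,
    "lex_decompsition_dim": -1,  # flag name kept verbatim from the config dump
    "max_char_per_word": 10, "max_epochs": 10, "max_sent_length": 100,
    "optimize_type": "adam", "suffix": "snli_7",
    "with_NER": False, "with_POS": False,
    "with_aggregation_highway": True, "with_filter_layer": False,
    "with_highway": True, "with_lex_decomposition": False,
    "with_match_highway": True,
    "wo_attentive_match": False, "wo_char": False,
    "wo_full_match": False, "wo_left_match": False,
    "wo_max_attentive_match": False, "wo_maxpool_match": False,
    "wo_right_match": False, "word_level_MP_dim": -1,
}

with open("config_snli.json", "w") as f:
    json.dump(snli_config, f, indent=2, sort_keys=True)
```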

zhiguowang · Jul 05 '17

Could you please also share the configuration files for the WikiQA and TrecQA experiments that achieve your best results in the paper? Thank you very much!

cactiball · Jul 06 '17