
have you tried to use bert to improve the performance of JMEE?

Open xiaoya6666 opened this issue 5 years ago • 11 comments

Hi, thank you for sharing. I'm interested in whether you have tried using BERT to improve the performance of JMEE. I tried to reproduce JMEE, but I can't achieve the results reported in the paper.

xiaoya6666 avatar Dec 11 '19 13:12 xiaoya6666

Hi @xiaoya6666

I couldn't achieve the performance reported in the JMEE paper either. The following results were printed on my console:

python -m enet.run.ee.runner --train "ace-05-splits/train.json"  --test "ace-05-splits/test.json" --dev "ace-05-splits/dev.json" --earlystop 10 --restart 10 --optimizer "adadelta" --lr 1 --webd "./ace-05-splits/glove.6B.300d.txt" --batch 8 --epochs 99999 --device "cuda:0" --out "models/enet-081" --hps "{'wemb_dim': 300, 'wemb_ft': True, 'wemb_dp': 0.5, 'pemb_dim': 50, 'pemb_dp': 0.5, 'eemb_dim': 50, 'eemb_dp': 0.5, 'psemb_dim': 50, 'psemb_dp': 0.5, 'lstm_dim': 220, 'lstm_layers': 1, 'lstm_dp': 0, 'gcn_et': 3, 'gcn_use_bn': True, 'gcn_layers': 3, 'gcn_dp': 0.5, 'sa_dim': 300, 'use_highway': True, 'loss_alpha': 5}"

Epoch 40  dev loss:  3.0913915507072076 
dev ed p:  0.48264984227129337  dev ed r:  0.6375  dev ed f1:  0.5493716337522442 
dev ae p:  0.2878411910669975  dev ae r:  0.1281767955801105  dev ae f1:  0.17737003058103976
Epoch 40  test loss:  2.784576788090766 
test ed p:  0.3360323886639676  test ed r:  0.590047393364929  test ed f1:  0.4282029234737747 
test ae p:  0.20881226053639848  test ae r:  0.12219730941704036  test ae f1:  0.15417256011315417

Epoch 80  dev loss:  3.8771536317780955 
dev ed p:  0.5329949238578681  dev ed r:  0.65625  dev ed f1:  0.5882352941176472 
dev ae p:  0.24006908462867013  dev ae r:  0.15359116022099448  dev ae f1:  0.18733153638814018
Epoch 80  test loss:  3.8047063166558157 
test ed p:  0.3799705449189985  test ed r:  0.6113744075829384  test ed f1:  0.46866485013623976 
test ae p:  0.22857142857142856  test ae r:  0.18834080717488788  test ae f1:  0.20651505838967424

Epoch 120  dev loss:  4.38567394134314 
dev ed p:  0.572992700729927  dev ed r:  0.6541666666666667  dev ed f1:  0.6108949416342413 
dev ae p:  0.23627287853577372  dev ae r:  0.1569060773480663  dev ae f1:  0.18857901726427623
Epoch 120  test loss:  4.248081724495084 
test ed p:  0.40793650793650793  test ed r:  0.6090047393364929  test ed f1:  0.48859315589353614 
test ae p:  0.2297476759628154  test ae r:  0.19394618834080718  test ae f1:  0.21033434650455926

Epoch 160  dev loss:  4.3482774938757345 
dev ed p:  0.574585635359116  dev ed r:  0.65  dev ed f1:  0.6099706744868035 
dev ae p:  0.23304347826086957  dev ae r:  0.14806629834254142  dev ae f1:  0.18108108108108106
Epoch 160  test loss:  4.217268991275621 
test ed p:  0.41423948220064727  test ed r:  0.6066350710900474  test ed f1:  0.49230769230769234 
test ae p:  0.23285714285714285  test ae r:  0.1827354260089686  test ae f1:  0.20477386934673367

Epoch 199  dev loss:  4.394452537438701 
dev ed p:  0.5831775700934579  dev ed r:  0.65  dev ed f1:  0.6147783251231527 
dev ae p:  0.23861566484517305  dev ae r:  0.14475138121546963  dev ae f1:  0.1801925722145805
Epoch 199  test loss:  4.19947991046335 
test ed p:  0.4169381107491857  test ed r:  0.6066350710900474  test ed f1:  0.4942084942084942 
test ae p:  0.2422907488986784  test ae r:  0.18497757847533633  test ae f1:  0.20979020979020976

I think the word embeddings in JMEE could be replaced with BERT, but I haven't tried it yet.
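
Roughly what I have in mind, as a minimal sketch (assuming the huggingface transformers library; the helper below is illustrative, not code from this repo): take one vector per word by averaging its WordPiece pieces, so the rest of JMEE (BiLSTM/GCN) can stay unchanged apart from the input dimension being 768 instead of 300.

import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

def bert_word_vectors(words):
    # one 768-d vector per word, averaged over its WordPiece pieces
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]        # (num_pieces, 768)
    word_ids = enc.word_ids(0)                           # piece -> word index (None for [CLS]/[SEP])
    vecs = []
    for i in range(len(words)):
        piece_idx = [j for j, w in enumerate(word_ids) if w == i]
        vecs.append(hidden[piece_idx].mean(dim=0))
    return torch.stack(vecs)                             # (len(words), 768)

# usage: feed these vectors wherever the 300-d GloVe lookup goes now
vectors = bert_word_vectors(["He", "was", "arrested", "in", "Baghdad", "."])

Fine-tuning BERT end-to-end instead of freezing it would be the other obvious variant.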

bowbowbow avatar Dec 12 '19 06:12 bowbowbow

@xiaoya6666 I also tried to reproduce JMEE, but I couldn't achieve the paper's results either. I tried using BERT to replace JMEE's word embeddings, but the ED F1 only reached 0.69. It seems the GCN in the JMEE paper doesn't actually bring an obvious improvement. Maybe I combined the GCN and BERT in an inappropriate way.
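
Roughly how I combined them, as a sketch (illustrative, not my actual code): a graph-convolution layer applied on top of the BERT word vectors, using the dependency parse as the adjacency matrix.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # h: (n, dim) word vectors from BERT, adj: (n, n) dependency adjacency (1 = syntactic edge)
        adj = adj + torch.eye(adj.size(0))     # self-loops so each word keeps its own features
        deg = adj.sum(dim=1, keepdim=True)     # degree normalization
        return torch.relu(adj @ self.linear(h) / deg)

# usage: stack 2-3 of these on the 768-d BERT word vectors, then classify triggers/arguments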

ll0ruc avatar Dec 13 '19 05:12 ll0ruc

@bowbowbow Excuse me, could you tell me how to set the parameters in this BERT model? Thank you very much!

xiaoya6666 avatar Dec 24 '19 04:12 xiaoya6666

@ll0ruc Excuse me, did you run into overfitting when you replaced the word embeddings with BERT in JMEE? Also, is the 0.69 F1 you got for argument classification or trigger classification? I found that the model overfits very badly when I add the entity type embedding.
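
What I mean by adding the entity type embedding, as a rough sketch (illustrative names, not the actual code): a small embedding looked up per token and concatenated onto the word vector; this is the part where I see the overfitting.

import torch
import torch.nn as nn

class EntityTypeConcat(nn.Module):
    def __init__(self, num_entity_types, ent_dim=50, dropout=0.5):
        # ent_dim=50 to mirror the eemb_dim=50 setting in the command above
        super().__init__()
        self.ent_emb = nn.Embedding(num_entity_types, ent_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, word_vecs, entity_type_ids):
        # word_vecs: (n, 768) BERT vectors, entity_type_ids: (n,) one entity-type id per token
        ent = self.dropout(self.ent_emb(entity_type_ids))
        return torch.cat([word_vecs, ent], dim=-1)        # (n, 768 + ent_dim)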

xiaoya6666 avatar Dec 29 '19 02:12 xiaoya6666

I hadn't noticed that problem. My trigger prediction can reach about 70%, but if everything gets predicted as "O", wouldn't the performance drop? Still, my trigger prediction is fairly stable; the main problem is that the argument results won't improve.

(Quoting Hanlard's earlier reply:) I looked into it a bit. It seems that once the step count (batch_size=8) goes past 20, all triggers get predicted as "O"; the loss then stays equal to trigger_loss, argument_loss stops decreasing, and the role (argument) classification performance can't improve. I feel the model is a bit too simple...
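
To make that concrete, a rough sketch of the kind of joint loss involved (illustrative names, not the actual code in either repo): once every trigger is predicted as "O", no candidates reach the argument classifier, the argument term contributes nothing, and the total loss reduces to the trigger term, which matches the behavior described above.

import torch.nn.functional as F

def joint_loss(trigger_logits, trigger_gold, arg_logits, arg_gold, alpha=5.0):
    # trigger_logits: (num_tokens, num_trigger_labels)
    # arg_logits/arg_gold: one row per (predicted trigger, entity) candidate pair
    trigger_loss = F.cross_entropy(trigger_logits, trigger_gold)
    if arg_logits is None or arg_logits.size(0) == 0:
        # every trigger predicted as "O" -> no candidates -> loss reduces to the trigger term
        return trigger_loss
    argument_loss = F.cross_entropy(arg_logits, arg_gold)
    return trigger_loss + alpha * argument_loss           # alpha plays the role of loss_alpha=5 above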

xiaoya6666 avatar Dec 31 '19 08:12 xiaoya6666

But this BERT code already follows JMEE's argument extraction approach, doesn't it? Emmm.

(Quoting Hanlard's earlier reply:) That was my mistake, sorry. For the arguments, I think you can refer to JMEE's model.

xiaoya6666 avatar Jan 02 '20 03:01 xiaoya6666

@xiaoya6666 Hello, is your 70% trigger result from the BERT code or from JMEE? Could we add each other on QQ to discuss in detail (2512156864)?

ll0ruc avatar Jan 02 '20 05:01 ll0ruc

From the BERT code; I fine-tuned the bert-base-uncased model.

xiaoya6666 avatar Jan 03 '20 11:01 xiaoya6666

I also got an F1 for trigger classification of around 69 with BERT plus a linear classification layer. But this is well below the results reported in the papers (https://www.aclweb.org/anthology/P19-1522.pdf, https://www.aclweb.org/anthology/K19-1061.pdf), where F1 ranges from 73 to 80. I don't think a CRF would help much. Has anyone experienced the same problem?
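
For reference, the "BERT + linear classification layer" setup I mean, as a minimal sketch (assuming huggingface transformers; illustrative, not this repo's code):

import torch.nn as nn
from transformers import BertModel

class TriggerClassifier(nn.Module):
    def __init__(self, num_trigger_labels):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_trigger_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.classifier(self.dropout(hidden))       # (batch, seq_len, num_trigger_labels)

A CRF would only replace the per-token argmax over these logits with sequence-level decoding, which is why I doubt it closes a 5-10 point gap.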

edchengg avatar Mar 18 '20 18:03 edchengg

@edchengg I emailed all the authors who reported around 80% experimental results, but no one replied to me. Moreover, the papers give no hyperparameter details, so I still could not reproduce the effect. I think their experimental numbers are not genuine.

Wangpeiyi9979 avatar Oct 06 '20 14:10 Wangpeiyi9979

Remember, most of the results announced in papers by new authors simply cannot be reproduced.

znsoftm avatar Mar 08 '22 08:03 znsoftm