UMT
Question about the MT-BERT code
Dear Jeffery, I'm very interested in the work in this paper. I have a small question about the code: if I want to implement MT-BERT-CRF, which part of the code should I modify? Thank you.
Hi there,
Thanks for your interest in our paper! I am not sure which MT-BERT-CRF you are referring to, but if it is a pre-trained model, you can first add a link to this pre-trained model in lines 43 to 51 of the "mner_modeling.py" file under the "my_bert" folder, and then set the parameter "bert_model" to your pre-trained model's name in the "run_mtmner_crf.sh" file.
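To illustrate the two steps, here is a hedged sketch of the registration pattern. The dictionary name, model names, and URL below are placeholders for illustration; the actual dictionary in "mner_modeling.py" may use different names and entries, so adapt this to what you find at lines 43 to 51.

```python
# Hypothetical sketch of registering a new pre-trained model.
# All names and URLs here are placeholders, not the repository's actual values.
PRETRAINED_MODEL_ARCHIVE_MAP = {
    # ... existing entries shipped with the repo ...
    # Add your own entry, then pass its key via the "bert_model"
    # parameter in run_mtmner_crf.sh:
    "my-mt-bert-crf": "https://example.com/my-mt-bert-crf.tar.gz",
}

def resolve_model_url(name):
    """Look up the download URL for a registered pre-trained model name."""
    try:
        return PRETRAINED_MODEL_ARCHIVE_MAP[name]
    except KeyError:
        raise ValueError(
            f"Unknown model '{name}'; add it to the archive map first."
        )
```

With an entry like this in place, setting the "bert_model" parameter in "run_mtmner_crf.sh" to the new key should make the loader fetch your model.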
Best, Jianfei
Thank you for your reply. If I have any other questions I would like to discuss with you, I will contact you again. Thanks again!
Sorry to disturb you again, dear Jeffery. I would like to ask: what do 'self_attention', 'self_attention_v2', 'vismap2text', and 'vismap2text_v2' stand for?
Hi there,
'self_attention' stands for the self-attention layer for the auxiliary task (in the left channel of Fig.2.a), whereas 'self_attention_v2' stands for the self-attention layer for our main MNER task (in the right channel of Fig.2.a). 'vismap2text' and 'vismap2text_v2' are both used to project the image representation (2048 dimensions) into the space of the text representation (768 dimensions), and they are respectively employed to produce the Image-Aware Word Representation (in the left channel of Fig.2.b) and the Word-Aware Visual Representation (in the right channel of Fig.2.b).
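The projection plus cross-attention pipeline described above can be sketched in a few lines. This is an illustrative NumPy mock-up, not the repository's PyTorch code: the shapes (a 7x7 ResNet feature map, i.e. 49 regions of 2048 dims, and 768-dim BERT word vectors) follow the reply, but the random weights and the single-head attention are simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: 49 image regions x 2048 dims, 12 words x 768 dims.
n_regions, vis_dim, txt_dim, n_words = 49, 2048, 768, 12
vis_map = rng.standard_normal((n_regions, vis_dim))  # image feature map
words = rng.standard_normal((n_words, txt_dim))      # contextualized words

# 'vismap2text'-style linear projection: 2048 -> 768,
# so visual features live in the text representation space.
W_proj = rng.standard_normal((vis_dim, txt_dim)) / np.sqrt(vis_dim)
vis_txt = vis_map @ W_proj                           # shape (49, 768)

def cross_attention(queries, keys_values):
    """Single-head scaled dot-product attention (simplified)."""
    scores = queries @ keys_values.T / np.sqrt(queries.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ keys_values

# Word-aware visual representation: each word attends over the projected
# visual regions, yielding one 768-dim visual summary per word.
word_aware_visual = cross_attention(words, vis_txt)  # shape (12, 768)
```

Swapping the roles of queries and keys/values (regions attending over words) would give the symmetric image-aware word side of Fig.2.b.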
Best, Jianfei
OK, I got it. Thanks for your patience!
I am sorry to disturb you again, dear Jeffery. I have a problem with some of my results, even though I am using the same versions as you. I don't know whether it is caused by a warning. Will it affect the final result? How should I deal with it?