Yan Ling
These empty images are just the raw images; these samples are not included in any of our three MABSA subtasks. The folders /twitter2017_images and /twitter2015_images contain the complete raw data...
You can follow the instructions at https://github.com/jiasenlu/vilbert_beta/tree/master/data. You just need to modify the code in ./tools/generate_tsv.py: specifically, change "image_ids" to the list of image paths you want to extract...
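For the modification above, a minimal sketch of building that list of image paths is shown below. The function name `collect_image_ids` and the directory layout are my own placeholders, not part of generate_tsv.py; the idea is just to replace the script's `image_ids` with paths gathered from your own image folder:

```python
import os

def collect_image_ids(image_dir, exts=(".jpg", ".jpeg", ".png")):
    """Gather image paths from image_dir to stand in for the script's
    original image_ids list (hypothetical helper, not repo code)."""
    paths = []
    for name in sorted(os.listdir(image_dir)):
        if name.lower().endswith(exts):
            paths.append(os.path.join(image_dir, name))
    return paths

# e.g. inside generate_tsv.py, one could then write:
# image_ids = collect_image_ids("/path/to/twitter2015_images")
```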
It was done in a Linux (Ubuntu) environment; configuring Caffe is rather troublesome.
The code supports the MASC task; you only need to modify a small part during inference. In our experiments, during training we use the full output format like "entity1_start...
The input target is the same as in the JMASA task; the difference lies only in inference. For the JMASA task, generation starts from start tokens like "bos" "JMASA" and...
Yes, the input is the same, but the evaluation still computes the F1 score separately.
The input is unchanged; I didn't fully understand what you meant earlier. The encoder side still takes only the image and the original text, and the decoder side is the same during training; the only change is that at test time the ground truth is fed directly for the entity parts.
First input the start tokens "bos" "MASC", then append the span of the first golden entity and generate the first sentiment; then append the second entity and continue generating.
During training the input is complete; during testing, entities are fed one at a time: golden entity1 → generate sentiment1, then golden entity1 sentiment1 golden entity2 → predict sentiment2. I will update the related code later.
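The teacher-forced MASC decoding described above can be sketched as follows. The function names, the one-step decoder interface, and the sentiment token are all hypothetical placeholders, not the repo's actual API; the sketch only illustrates how golden entity spans are interleaved with generated sentiment tokens:

```python
def masc_inference(generate_next_token, entity_spans):
    """Sketch of MASC inference: feed each golden entity span, then let the
    decoder emit one sentiment token before appending the next entity.

    generate_next_token(prefix) -> str stands in for one greedy decoding
    step of the seq2seq model; entity_spans is a list of golden entity
    token sequences.
    """
    prefix = ["bos", "MASC"]             # task-specific start tokens
    sentiments = []
    for span in entity_spans:
        prefix.extend(span)              # append the golden entity span
        s = generate_next_token(prefix)  # model predicts its sentiment
        prefix.append(s)                 # keep it in the decoder prefix
        sentiments.append(s)
    return sentiments
```

So the second prediction is conditioned on "bos MASC entity1 sentiment1 entity2", matching the description above.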