
Would you release the multi-task fine-tuning codes for ViL-BERT?

Open yangapku opened this issue 6 years ago • 4 comments

Hi, I have read your new paper "12-in-1: Multi-Task Vision and Language Representation Learning" on arXiv, which uses multi-task fine-tuning to boost ViLBERT's performance. May I ask whether you will release this part of the code in this repo or somewhere else? Thank you very much!

yangapku avatar Dec 07 '19 12:12 yangapku

Hi

Thanks for the interest! Yes, we plan to release the code and pretrained model for the new paper (12-in-1). That code will be released under the Facebook AI GitHub and is still in review; I think the code and model should be out this month. In the meantime, I'm working on a new open-source multi-modal multi-task transformer (M3Transformer), optimized for the new transformer codebase. I will release that project this month as well.

jiasenlu avatar Jan 03 '20 20:01 jiasenlu

Great! I'm delighted to hear this. I will wait for the release.

yangapku avatar Jan 04 '20 03:01 yangapku

Check out this release! https://github.com/facebookresearch/vilbert-multi-task

jiasenlu avatar Jan 15 '20 20:01 jiasenlu

Thank you for the kind notification! Would you please release the data in this repo as well, e.g. the LMDB files, and document how to generate features with the new ResNeXt detector?

yangapku avatar Jan 16 '20 01:01 yangapku