RoBERTa on SuperGLUE's 'Multi-Sentence Reading Comprehension' task
MultiRC is one of the tasks of the SuperGLUE benchmark. The task is to re-trace the steps of Facebook's RoBERTa paper (https://arxiv.org/pdf/1907.11692.pdf) and build an AllenNLP config that reads the MultiRC data and fine-tunes a model on it. We expect scores in the range of their entry on the SuperGLUE leaderboard.
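To get a feel for the data the reader has to handle, here is a minimal sketch of parsing the SuperGLUE MultiRC JSONL distribution into flat (passage, question, answer, label) tuples. The field names (`passage`, `questions`, `answers`, `label`) reflect my understanding of the SuperGLUE release format; verify them against your copy of the data before relying on this.

```python
import json

def read_multirc_lines(lines):
    """Flatten SuperGLUE MultiRC JSONL lines into (passage, question, answer, label) tuples.

    Field names are assumptions based on the SuperGLUE MultiRC distribution;
    check them against the actual files.
    """
    examples = []
    for line in lines:
        record = json.loads(line)
        passage = record["passage"]["text"]
        for question in record["passage"]["questions"]:
            for answer in question["answers"]:
                # Each answer is labeled independently (1 = correct, 0 = incorrect),
                # so a single question may have several correct answers.
                examples.append(
                    (passage, question["question"], answer["text"], answer.get("label"))
                )
    return examples
```

Note that because answers are labeled independently rather than one-correct-per-question, the dataset reader has to decide how to group answer candidates when casting this as multiple choice.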
This can be formulated as a multiple choice task, using the `TransformerMC` model from the Transformer Toolkit, analogous to the PIQA model. You can start with the experiment config and dataset reading step from PIQA, and adapt them to your needs.
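As a starting point, here is a rough sketch of what the adapted experiment config might contain, written as the Python-dict equivalent of the Jsonnet file. The reader name `"multirc"` is hypothetical (you would register it yourself after adapting the PIQA reader), `"transformer_mc"` is the Transformer Toolkit multiple-choice model, the file paths are placeholders, and the hyperparameters are only plausible values in the fine-tuning ranges reported in the RoBERTa paper, not tuned settings.

```python
# Sketch of an AllenNLP experiment config for MultiRC, adapted from the PIQA setup.
# "multirc" is a hypothetical dataset-reader name; paths and hyperparameters are
# placeholders, not validated settings.
transformer_model = "roberta-large"

config = {
    "dataset_reader": {
        "type": "multirc",  # hypothetical: adapt the PIQA reader and register it
        "transformer_model_name": transformer_model,
    },
    "train_data_path": "train.jsonl",       # placeholder path
    "validation_data_path": "val.jsonl",    # placeholder path
    "model": {
        "type": "transformer_mc",
        "transformer_model": transformer_model,
    },
    "data_loader": {"batch_size": 16, "shuffle": True},
    "trainer": {
        "optimizer": {
            "type": "huggingface_adamw",
            "lr": 1e-5,            # within the RoBERTa paper's fine-tuning range
            "weight_decay": 0.01,
        },
        "num_epochs": 10,
    },
}
```

Dumping this dict as JSON gives a file that `allennlp train` can consume, once the reader exists.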
Hi team,
Thank you for building this fantastic framework; I am a big fan of it, and I'd love to make my first open-source contribution to AllenNLP.
I am interested in this task; is this issue open for contributions?
I don't have any experience with QA tasks, but I do have some experience applying AllenNLP to NER and classification problems. Could you give me some suggestions on how to start? I would also really appreciate recommendations for materials on QA tasks (papers, GitHub repos, or tutorials).
Many thanks!