Correction to 2022.case-1.31
Instructions
- Bib ID: hurriyetoglu-etal-2022-extended
- Problem: The entry in the bib+abstract file causes parsing of the bib file to fail.
- Cause: the abstract field has an unclosed opening brace: {{\&} instead of {\&} (a quick brace-balance check is sketched below).
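For anyone triaging similar reports, here is a minimal sketch (plain Python, not part of the Anthology tooling; the function name is made up for illustration) that flags unbalanced braces in a BibTeX field value:

```python
def brace_balance(value: str) -> int:
    """Return the number of unclosed opening braces in a BibTeX field value.

    0 means the braces are balanced; a positive value means the field
    will likely break BibTeX parsers, as with the abstract above.
    """
    depth = 0
    for ch in value:
        if ch == "{":
            depth += 1
        elif ch == "}" and depth > 0:
            depth -= 1
    return depth


# The broken fragment opens two groups but closes only one:
print(brace_balance(r"Portuguese {{\&} Subtask 4 English"))  # -> 1 (unbalanced)
print(brace_balance(r"Portuguese {\&} Subtask 4 English"))   # -> 0 (balanced)
```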
Metadata correction
Replace the abstract text (the only change is {{\&} --> {\&}):
from:
abstract = "We report results of the CASE 2022 Shared Task 1 on Multilingual Protest Event Detection. This task is a continuation of CASE 2021 that consists of four subtasks that are i) document classification, ii) sentence classification, iii) event sentence coreference identification, and iv) event extraction. The CASE 2022 extension consists of expanding the test data with more data in previously available languages, namely, English, Hindi, Portuguese, and Spanish, and adding new test data in Mandarin, Turkish, and Urdu for Sub-task 1, document classification. The training data from CASE 2021 in English, Portuguese and Spanish were utilized. Therefore, predicting document labels in Hindi, Mandarin, Turkish, and Urdu occurs in a zero-shot setting. The CASE 2022 workshop accepts reports on systems developed for predicting test data of CASE 2021 as well. We observe that the best systems submitted by CASE 2022 participants achieve between 79.71 and 84.06 F1-macro for new languages in a zero-shot setting. The winning approaches are mainly ensembling models and merging data in multiple languages. The best two submissions on CASE 2021 data outperform submissions from last year for Subtask 1 and Subtask 2 in all languages. Only the following scenarios were not outperformed by new submissions on CASE 2021: Subtask 3 Portuguese {{\&} Subtask 4 English.",
to:
abstract = "We report results of the CASE 2022 Shared Task 1 on Multilingual Protest Event Detection. This task is a continuation of CASE 2021 that consists of four subtasks that are i) document classification, ii) sentence classification, iii) event sentence coreference identification, and iv) event extraction. The CASE 2022 extension consists of expanding the test data with more data in previously available languages, namely, English, Hindi, Portuguese, and Spanish, and adding new test data in Mandarin, Turkish, and Urdu for Sub-task 1, document classification. The training data from CASE 2021 in English, Portuguese and Spanish were utilized. Therefore, predicting document labels in Hindi, Mandarin, Turkish, and Urdu occurs in a zero-shot setting. The CASE 2022 workshop accepts reports on systems developed for predicting test data of CASE 2021 as well. We observe that the best systems submitted by CASE 2022 participants achieve between 79.71 and 84.06 F1-macro for new languages in a zero-shot setting. The winning approaches are mainly ensembling models and merging data in multiple languages. The best two submissions on CASE 2021 data outperform submissions from last year for Subtask 1 and Subtask 2 in all languages. Only the following scenarios were not outperformed by new submissions on CASE 2021: Subtask 3 Portuguese {\&} Subtask 4 English.",
Thanks!
@anthology-assist Can we check if this was already in the metadata delivered to us? I don't think it's something our code could have introduced, but I'd like to know for sure.
@roxanneelbaff could you update the anthology id of the paper?
> @roxanneelbaff could you update the anthology id of the paper?
Sorry, for some reason I missed this comment.
Why do I need to update the ID? The bug was in the abstract of the entry mentioned above.
I believe what @anthology-assist meant is that we need the Anthology ID of the paper, not the BibTeX key, so it would speed things up if you could edit the title of this issue to include the actual Anthology ID.
Hi, let's just figure this out so we can get it processed and cleared.
@mbollmann What we ingested is &, i.e., Subtask 3 Portuguese & Subtask 4 English. I'm looking at the page https://aclanthology.org/2022.case-1.31/ and the & at the end of the abstract looks fine to me; see the screenshot below.
Did I miss something? @roxanneelbaff
It seems this has been fixed in the intervening year.
@roxanneelbaff thanks for bringing this to our attention, and sorry that it took so long (and so much back-and-forth) to address. This one seems to have slipped through the cracks.