deep-diacritization
Official Repository of the Deep Diacritization Paper
Deep Diacritization: Efficient Hierarchical Recurrence for Improved Arabic Diacritization
[Demo], [ACL], [arXiv], [Research Gate], [Papers with Code], [Slides]
We propose a novel architecture for labelling character sequences that achieves state-of-the-art results on the Tashkeela Arabic diacritization benchmark. The core is a two-level recurrence hierarchy that operates on the word and character levels separately, enabling faster training and inference than comparable traditional models. A cross-level attention module further connects the two, and opens the door for network interpretability. The task module is a softmax classifier that enumerates valid combinations of diacritics. This architecture can be extended with a recurrent decoder that optionally accepts priors from partially diacritized text, improving performance significantly. We employ extra tricks such as sentence dropout and majority voting to further boost the final result. Our best model achieves a WER of 5.34%, outperforming the previous state-of-the-art with a 30.56% relative error reduction.
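The hierarchy is easiest to picture as code. The PyTorch sketch below is **not** the repository's implementation; layer sizes, module names, and the exact attention wiring are assumptions. It only illustrates the idea of a word-level BiLSTM and a per-word character-level BiLSTM connected by cross-level attention, topped by a classifier over diacritic combinations.

```python
# Minimal, illustrative sketch (assumed shapes and names) of the two-level
# recurrence idea: a word-level BiLSTM runs once per sentence, a character-level
# BiLSTM runs per word, and cross-level attention lets each character attend
# over the word-level states before classifying its diacritic combination.
import torch
import torch.nn as nn


class HierarchicalDiacritizer(nn.Module):
    def __init__(self, n_chars, n_word_feats, n_classes, d=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d)
        self.word_rnn = nn.LSTM(n_word_feats, d, batch_first=True, bidirectional=True)
        self.char_rnn = nn.LSTM(d, d, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * d, num_heads=1, batch_first=True)
        self.classifier = nn.Linear(4 * d, n_classes)

    def forward(self, word_feats, char_ids):
        # word_feats: (batch, n_words, n_word_feats)  e.g. fastText embeddings
        # char_ids:   (batch, n_words, n_chars_per_word)
        B, W, C = char_ids.shape
        word_states, _ = self.word_rnn(word_feats)                  # (B, W, 2d)

        # Run the character RNN over every word independently (short sequences).
        chars = self.char_emb(char_ids).reshape(B * W, C, -1)        # (B*W, C, d)
        char_states, _ = self.char_rnn(chars)                        # (B*W, C, 2d)
        char_states = char_states.reshape(B, W * C, -1)              # (B, W*C, 2d)

        # Cross-level attention: each character queries the word-level states.
        ctx, _ = self.attn(char_states, word_states, word_states)    # (B, W*C, 2d)

        # Logits over the enumerated diacritic combinations (softmax in the loss).
        return self.classifier(torch.cat([char_states, ctx], dim=-1))


# Tiny smoke test with random inputs; vocabulary and class sizes are made up.
model = HierarchicalDiacritizer(n_chars=50, n_word_feats=300, n_classes=15)
out = model(torch.randn(2, 6, 300), torch.randint(0, 50, (2, 6, 10)))
print(out.shape)  # torch.Size([2, 60, 15])
```

The intuition behind the speed-up is that each recurrence sees a short sequence: the word-level RNN runs over words rather than characters, and the character-level RNN runs over single words, which can be processed in parallel across the sentence.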
Results on the Tashkeela Benchmark
| DER / WER | Including 'no diacritic', w/ case ending | Including 'no diacritic', w/o case ending | Excluding 'no diacritic', w/ case ending | Excluding 'no diacritic', w/o case ending |
|---|---|---|---|---|
| Barqawi, 2017 | 3.73% / 11.19% | 2.88% / 6.53% | 4.36% / 10.89% | 3.33% / 6.37% |
| Fadel et al., 2019 | 2.60% / 7.69% | 2.11% / 4.57% | 3.00% / 7.39% | 2.42% / 4.44% |
| Abbad and Xiong, 2020 | 3.39% / 9.94% | 2.61% / 5.83% | 3.34% / 7.98% | 2.43% / 3.98% |
| D2 (Ours) | 1.85% / 5.53% | 1.49% / 3.27% | 2.11% / 5.26% | 1.71% / 3.15% |
| D3 (Ours) | 1.83% / 5.34% | 1.48% / 3.11% | 2.09% / 5.08% | 1.69% / 3.00% |
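For reference, DER is the fraction of characters assigned a wrong diacritic label and WER is the fraction of words containing at least one such character; the four columns toggle whether undiacritized positions and the case-ending (final) character of each word are scored. The function below is only an illustrative re-implementation of that scoring, not the repository's evaluation script.

```python
# Illustrative DER/WER over aligned (gold, predicted) diacritic label sequences,
# one list of labels per word. "" stands for 'no diacritic'. The flags mirror
# the table: whether to score undiacritized positions and the case ending.
def der_wer(gold_words, pred_words, include_no_diacritic=True, include_case_ending=True):
    char_total = char_errors = word_total = word_errors = 0
    for gold, pred in zip(gold_words, pred_words):
        if not include_case_ending:
            gold, pred = gold[:-1], pred[:-1]
        word_has_error = False
        for g, p in zip(gold, pred):
            if not include_no_diacritic and g == "":
                continue
            char_total += 1
            if g != p:
                char_errors += 1
                word_has_error = True
        if gold:
            word_total += 1
            word_errors += word_has_error
    return char_errors / max(char_total, 1), word_errors / max(word_total, 1)


# Example: two words, one wrong label on the first word's second character.
gold = [["FATHA", "KASRA", "DAMMA"], ["", "SUKUN"]]
pred = [["FATHA", "FATHA", "DAMMA"], ["", "SUKUN"]]
print(der_wer(gold, pred))  # (0.2, 0.5)
```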
Step-by-Step Guide
1. Download Tashkeela
```bash
bash scripts/download_tashkeela.sh
```
2. Download fastText Arabic CC Binary
```bash
bash scripts/download_fasttext_ar.sh
```
3. Segment Datasets
```bash
bash scripts/segment_train_val.sh
bash scripts/segment_test.sh
```
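These scripts split the long Tashkeela lines into shorter segments for training and evaluation. A rough sketch of the general idea is below; the segment budget of 400 characters is only an assumption, and the repository's scripts are the reference implementation.

```python
# Rough sketch of word-boundary segmentation: greedily pack whole words into
# segments no longer than max_len characters (max_len is an assumed value).
def segment(line, max_len=400):
    segments, current = [], []
    for word in line.split():
        candidate = current + [word]
        if current and len(" ".join(candidate)) > max_len:
            segments.append(" ".join(current))
            current = [word]
        else:
            current = candidate
    if current:
        segments.append(" ".join(current))
    return segments


print(segment("a b c d e f", max_len=5))  # ['a b c', 'd e f']
```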
4. Extract and Embed Tashkeela Vocabulary
```bash
bash scripts/embed_vocab.sh
```
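This step looks up a fastText vector for each word in the extracted Tashkeela vocabulary using the Arabic CC binary from step 2. A minimal sketch with the official `fasttext` Python bindings is shown below; the file names and `.npz` output format are assumptions, and the script above is authoritative.

```python
# Embed each vocabulary word with the Arabic CC fastText binary from step 2.
# Paths and the output format are assumptions for illustration only.
import fasttext
import numpy as np

model = fasttext.load_model("cc.ar.300.bin")
with open("vocab.txt", encoding="utf-8") as f:
    vocab = [line.strip() for line in f if line.strip()]

vectors = np.stack([model.get_word_vector(w) for w in vocab])  # (len(vocab), 300)
np.savez("vocab_embeddings.npz", words=np.array(vocab), vectors=vectors)
```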
5. Train Model
```bash
bash scripts/train.sh d2
```
6. Predict then Evaluate Model
```bash
bash scripts/evaluate.sh d2
```
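As mentioned in the abstract, the final labels are chosen by majority voting when several predictions exist for the same character (for example from overlapping segments). A minimal sketch of that voting step follows; the actual aggregation and tie-breaking live in the evaluation script.

```python
# Majority voting over per-character predictions collected from several
# overlapping segments or runs. Ties here fall back to Counter's ordering;
# the repository's script may break ties differently.
from collections import Counter

def majority_vote(predictions_per_char):
    return [Counter(votes).most_common(1)[0][0] for votes in predictions_per_char]


votes = [["FATHA", "FATHA", "KASRA"], ["DAMMA", "DAMMA"], ["SUKUN"]]
print(majority_vote(votes))  # ['FATHA', 'DAMMA', 'SUKUN']
```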
Pretrained Models
Citation
This work was accepted at the Fifth Arabic Natural Language Processing Workshop (COLING/WANLP 2020).
```bibtex
@inproceedings{alkhamissi-etal-2020-dd,
    title = "Deep Diacritization: Efficient Hierarchical Recurrence for Improved {A}rabic Diacritization",
    author = "AlKhamissi, Badr and
      ElNokrashy, Muhammad and
      Gabr, Mohamed",
    booktitle = "Proceedings of the Fifth Arabic Natural Language Processing Workshop",
    month = dec,
    year = "2020",
    address = "Barcelona, Spain (Online)",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.wanlp-1.4",
    pages = "38--48",
    abstract = "We propose a novel architecture for labelling character sequences that achieves state-of-the-art results on the Tashkeela Arabic diacritization benchmark. The core is a two-level recurrence hierarchy that operates on the word and character levels separately{---}enabling faster training and inference than comparable traditional models. A cross-level attention module further connects the two and opens the door for network interpretability. The task module is a softmax classifier that enumerates valid combinations of diacritics. This architecture can be extended with a recurrent decoder that optionally accepts priors from partially diacritized text, which improves results. We employ extra tricks such as sentence dropout and majority voting to further boost the final result. Our best model achieves a WER of 5.34{\%}, outperforming the previous state-of-the-art with a 30.56{\%} relative error reduction.",
}
```