NLP for Malayalam

This repository contains state-of-the-art language models and a classifier for Malayalam, which is spoken by the Malayali people in the Indian state of Kerala and the union territories of Lakshadweep and Puducherry.

The models trained here have been used in the Natural Language Toolkit for Indic Languages (iNLTK).
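As a quick way to try the pretrained Malayalam model through iNLTK, the snippet below is a minimal sketch. It assumes iNLTK (and a compatible PyTorch) is installed; `ml` is iNLTK's language code for Malayalam, and the example sentence is only illustrative.

```python
# pip install inltk   (see the iNLTK docs for supported PyTorch versions)
from inltk.inltk import setup, tokenize, predict_next_words

# One-time download of the pretrained Malayalam model; 'ml' is the language code.
setup('ml')

text = 'കേരളം ഇന്ത്യയിലെ ഒരു സംസ്ഥാനമാണ്'   # "Kerala is a state in India"
print(tokenize(text, 'ml'))                # subword tokens from the trained tokenizer
print(predict_next_words(text, 5, 'ml'))   # let the language model continue the sentence
```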

Dataset

Created as part of this project

  1. Malayalam Wikipedia Articles

  2. Malayalam News Dataset

Open Source Datasets

  1. iNLTK Headlines Corpus - Malayalam: uses the Malayalam News Dataset prepared above

Results

Language Model Perplexity (on validation set)

| Architecture/Dataset | Malayalam Wikipedia Articles |
| -------------------- | ---------------------------- |
| ULMFiT               | 26.39                        |
| TransformerXL        | 25.79                        |
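For reference, perplexity here is the exponential of the mean per-token cross-entropy on the validation set. A minimal, self-contained sketch with made-up token counts (the numbers below are illustrative, not the actual validation statistics):

```python
import math

def perplexity(total_cross_entropy: float, n_tokens: int) -> float:
    """Perplexity = exp(mean per-token cross-entropy over the validation set)."""
    return math.exp(total_cross_entropy / n_tokens)

# Hypothetical example: a validation set of 1,000,000 tokens with a summed
# negative log-likelihood of 3,271,000 nats gives a perplexity of about 26.3.
print(perplexity(3_271_000, 1_000_000))
```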

Classification Metrics

ULMFiT
| Dataset                            | Accuracy | MCC   | Notebook to Reproduce results |
| ---------------------------------- | -------- | ----- | ----------------------------- |
| iNLTK Headlines Corpus - Malayalam | 95.56    | 93.29 | Link                          |
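Accuracy and MCC (Matthews correlation coefficient) appear to be reported on a 0-100 scale. A minimal sketch of how both can be computed with scikit-learn; the labels below are hypothetical and are not the actual corpus categories:

```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Hypothetical predictions on a headline-classification test set.
y_true = ['sports', 'business', 'entertainment', 'sports', 'business']
y_pred = ['sports', 'business', 'sports',        'sports', 'business']

print(f'Accuracy: {accuracy_score(y_true, y_pred) * 100:.2f}')
print(f'MCC:      {matthews_corrcoef(y_true, y_pred) * 100:.2f}')
```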

Visualizations

Word Embeddings
| Architecture  | Visualization         |
| ------------- | --------------------- |
| ULMFiT        | Embeddings projection |
| TransformerXL | Embeddings projection |
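A similar projection can be produced by exporting embedding vectors in the TSV format expected by the TensorFlow Embedding Projector (https://projector.tensorflow.org). The sketch below assumes iNLTK's get_embedding_vectors; the word list and file names are illustrative:

```python
from inltk.inltk import get_embedding_vectors

# A few illustrative Malayalam words; get_embedding_vectors returns one vector per token.
words = ['കേരളം', 'മലയാളം', 'ഇന്ത്യ']
vectors = [get_embedding_vectors(word, 'ml')[0] for word in words]

# Write the two TSV files the Embedding Projector expects:
# vectors.tsv (one embedding per line) and metadata.tsv (the matching labels).
with open('vectors.tsv', 'w', encoding='utf-8') as vf, \
     open('metadata.tsv', 'w', encoding='utf-8') as mf:
    for word, vector in zip(words, vectors):
        vf.write('\t'.join(str(value) for value in vector) + '\n')
        mf.write(word + '\n')
```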

Results of using Transfer Learning + Data Augmentation from iNLTK

On using complete training set (with Transfer learning)

| Dataset                            | Dataset size (train, valid, test) | Accuracy | MCC   | Notebook to Reproduce results |
| ---------------------------------- | --------------------------------- | -------- | ----- | ----------------------------- |
| iNLTK Headlines Corpus - Malayalam | (5036, 630, 630)                  | 95.56    | 93.29 | Link                          |

On using 10% of training set (with Transfer learning)

| Dataset                            | Dataset size (train, valid, test) | Accuracy | MCC   | Notebook to Reproduce results |
| ---------------------------------- | --------------------------------- | -------- | ----- | ----------------------------- |
| iNLTK Headlines Corpus - Malayalam | (503, 630, 630)                   | 82.38    | 73.47 | Link                          |

On using 10% of training set (with Transfer learning + Data Augmentation)

| Dataset                            | Dataset size (train, valid, test) | Accuracy | MCC   | Notebook to Reproduce results |
| ---------------------------------- | --------------------------------- | -------- | ----- | ----------------------------- |
| iNLTK Headlines Corpus - Malayalam | (503, 630, 630)                   | 84.29    | 76.36 | Link                          |
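iNLTK exposes its augmentation through get_similar_sentences, which generates variants of a sentence from the language model's embedding space. A minimal sketch of augmenting the reduced training set; the headline is a made-up example, and this is only an approximation of the augmentation setup used for the numbers above:

```python
from inltk.inltk import setup, get_similar_sentences

setup('ml')  # one-time model download, as above

# Hypothetical training headline ("heavy rain in Kerala"); generate 5 augmented
# variants to enlarge the 10% training subset before fine-tuning the classifier.
headline = 'കേരളത്തിൽ ശക്തമായ മഴ'
for variant in get_similar_sentences(headline, 5, 'ml'):
    print(variant)
```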

Pretrained Models

Language Models

Download pretrained Language Model from here

Tokenizer

The tokenizer was trained using Google's sentencepiece.

Download the trained model and vocabulary from here
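A minimal sketch of loading the downloaded tokenizer with the sentencepiece Python package; the file name 'tokenizer.model' is an assumption, so substitute whatever file the download provides:

```python
import sentencepiece as spm

# Load the trained SentencePiece model (file name is an assumption).
sp = spm.SentencePieceProcessor()
sp.load('tokenizer.model')

text = 'കേരളം ഇന്ത്യയിലെ ഒരു സംസ്ഥാനമാണ്'   # "Kerala is a state in India"
pieces = sp.encode_as_pieces(text)   # subword pieces ('▁' marks word boundaries)
ids = sp.encode_as_ids(text)         # corresponding vocabulary ids
print(pieces, ids)
print(sp.decode_pieces(pieces))      # round-trips back to the original text
```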