xlm-roberta topic
UER-py
Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
trankit
Trankit is a Light-Weight Transformer-based Python Toolkit for Multilingual Natural Language Processing
banglabert
This repository contains the official release of the BanglaBERT model, along with the downstream fine-tuning code and datasets introduced in the accompanying BanglaBERT language model pretraining paper.
cino
CINO: Pre-trained Language Models for Chinese Minority Languages
AILC-lectures2021-lab
This is a PyTorch (+ Hugging Face Transformers) implementation of a simple text classifier built on BERT-based models. In this lab we will see how simple it is to use BERT for sentence classification.
Tutorial-Resources
Resources and tools for the tutorial "Hate speech detection, mitigation and beyond", presented at ICWSM 2021
TencentPretrain
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
curated-transformers
🤖 A PyTorch library of curated Transformer models and their composable components
syntaxdot
Neural syntax annotator, supporting sequence labeling, lemmatization, and dependency parsing.
Long-texts-Sentiment-Analysis-RoBERTa
PyTorch implementation of sentiment analysis for long texts written in Serbian (a low-resource language), using the pretrained multilingual RoBERTa-based model XLM-R on a small dataset.
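Classifying texts longer than a transformer's input limit (512 tokens for XLM-R) commonly involves splitting the text into overlapping chunks, scoring each chunk, and aggregating the results. The sketch below illustrates that general strategy only; the tokenizer and scorer are hypothetical stand-ins, not the repository's actual code.

```python
# Minimal sketch of the sliding-window strategy for long-text classification:
# split the token sequence into overlapping windows, score each window, and
# average the scores. `score_chunk` is a placeholder for a real model call.

def chunk_tokens(tokens, max_len=512, stride=256):
    """Yield overlapping windows of at most max_len tokens."""
    if len(tokens) <= max_len:
        yield tokens
        return
    for start in range(0, len(tokens) - stride, stride):
        yield tokens[start:start + max_len]

def classify_long_text(text, score_chunk, max_len=512, stride=256):
    """Average per-chunk sentiment scores over all windows."""
    tokens = text.split()  # placeholder for a real subword tokenizer
    scores = [score_chunk(chunk)
              for chunk in chunk_tokens(tokens, max_len, stride)]
    return sum(scores) / len(scores)
```

With stride set to half of max_len, adjacent windows overlap by 50%, so sentiment cues near a chunk boundary are seen in full by at least one window.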