Asahi Ushio

10 repositories owned by Asahi Ushio

tner

389 stars, 41 forks

Language model fine-tuning on NER with an easy interface and cross-domain evaluation. "T-NER: An All-Round Python Library for Transformer-based Named Entity Recognition, EACL 2021"
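A fine-tuned NER model ultimately emits token-level BIO tags that must be decoded into entity spans. A minimal sketch of that decoding step (illustration only, independent of T-NER's actual API):

```python
def decode_bio(tokens, tags):
    """Convert parallel token/BIO-tag lists into (entity_type, text) spans."""
    entities, current, ctype = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((ctype, " ".join(current)))
            current, ctype = [token], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == ctype:
            current.append(token)
        else:
            if current:
                entities.append((ctype, " ".join(current)))
            current, ctype = [], None
    if current:
        entities.append((ctype, " ".join(current)))
    return entities

tokens = ["Asahi", "Ushio", "works", "in", "Cardiff"]
tags = ["B-PER", "I-PER", "O", "O", "B-LOC"]
print(decode_bio(tokens, tags))  # -> [('PER', 'Asahi Ushio'), ('LOC', 'Cardiff')]
```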

analogy-language-model

23 stars, 4 forks

The official implementation of "BERT is to NLP what AlexNet is to CV: Can Pre-Trained Language Models Identify Analogies?, ACL 2021 main conference"
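The paper asks whether pre-trained LMs can solve word analogies (a : b :: c : ?). A classical baseline for this task, and a useful point of reference, is the vector-offset method over word embeddings; a sketch with hand-made toy vectors (not the paper's method or data):

```python
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def solve_analogy(a, b, c, vocab):
    """Return the word whose vector is closest to b - a + c (vector-offset method)."""
    target = [bb - aa + cc for aa, bb, cc in zip(vocab[a], vocab[b], vocab[c])]
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

# Toy embeddings, hand-made for illustration only.
vocab = {
    "man":   [1.0, 0.0, 0.2],
    "woman": [1.0, 1.0, 0.2],
    "king":  [1.0, 0.0, 0.9],
    "queen": [1.0, 1.0, 0.9],
}
print(solve_analogy("man", "woman", "king", vocab))  # -> queen
```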

DeepDomainAdaptation

26 stars, 7 forks

TensorFlow implementations of deep-learning domain adaptation models, with an experiment adapting from SVHN to MNIST (SVHN -> MNIST): DANN (domain-adversarial neural network), Deep JDOT (joint distri...
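The core trick in DANN is a gradient reversal layer: it is the identity in the forward pass but multiplies gradients by -λ in the backward pass, so the feature extractor learns to confuse the domain classifier. A framework-free sketch of just that layer (not this repository's TensorFlow code):

```python
class GradientReversal:
    """Identity in the forward pass; scales incoming gradients by -lam in the
    backward pass (the domain-adversarial trick in DANN)."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad):
        return [-self.lam * g for g in grad]  # reversed, scaled gradient

grl = GradientReversal(lam=0.5)
assert grl.forward([1.0, 2.0]) == [1.0, 2.0]
assert grl.backward([2.0, -4.0]) == [-1.0, 2.0]
```

In a real implementation this is registered as a custom op/autograd function so the reversal happens automatically during backpropagation.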

kex

54 stars, 4 forks

Kex is a python library for unsupervised keyword extraction from a document, providing an easy interface and benchmarks on 15 public datasets.
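For intuition, the simplest unsupervised keyword extractor ranks candidate words by frequency after filtering stopwords. This naive sketch is for illustration only; kex implements stronger unsupervised methods:

```python
import re
from collections import Counter

STOPWORDS = {"a", "an", "the", "is", "of", "and", "for", "from", "on", "with"}

def extract_keywords(document, top_n=3):
    """Naive frequency-based keyword extraction (toy baseline, not kex's algorithm)."""
    words = re.findall(r"[a-z]+", document.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(top_n)]

print(extract_keywords("the cat chased the cat and the dog", top_n=1))  # -> ['cat']
```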

lm-question-generation

356 stars, 38 forks

Multilingual/multi-domain question generation datasets, models, and a Python library for question generation.
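Answer-aware question generation models are commonly fed the passage with the answer span marked by highlight tokens, so the model knows which span the question should target. A sketch of that preprocessing step (the exact `<hl>` token is an assumption about the format, not necessarily this library's):

```python
def highlight_answer(context, answer, hl_token="<hl>"):
    """Wrap the answer span in highlight tokens so an encoder-decoder LM
    can generate a question about that specific span."""
    start = context.find(answer)
    if start < 0:
        raise ValueError("answer not found in context")
    end = start + len(answer)
    return context[:start] + f"{hl_token} {answer} {hl_token}" + context[end:]

print(highlight_answer("Paris is the capital of France.", "Paris"))
# -> <hl> Paris <hl> is the capital of France.
```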

LSTMCell

30 stars, 8 forks

Implementations of modern LSTM cells in TensorFlow, tested on the PTB language modeling task: Highway State Gating, Hypernets, Recurrent Highway, Attention, Layer norm, Recurrent dropout, Variational drop...
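All of these variants build on the same vanilla LSTM cell update; a minimal scalar sketch of one step (real cells use weight matrices, and this repository's TensorFlow code differs):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell(x, h_prev, c_prev, W):
    """One step of a vanilla scalar LSTM cell. W maps gate name -> (w_x, w_h, bias)."""
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])    # input gate
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])    # forget gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])    # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])  # candidate state
    c = f * c_prev + i * g        # new cell state: forget old, write new
    h = o * math.tanh(c)          # new hidden state
    return h, c

# With all-zero weights every gate is 0.5 and the candidate is 0.
W = {k: (0.0, 0.0, 0.0) for k in "ifog"}
h, c = lstm_cell(1.0, 0.0, 2.0, W)  # c = 0.5 * 2.0 = 1.0
```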

relbert

47 stars, 6 forks

The official implementation of "Distilling Relation Embeddings from Pre-trained Language Models, EMNLP 2021 main conference": high-quality relation embeddings based on language models.
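A relation embedding maps a word *pair* to a vector so that analogous pairs land close together, and relational similarity reduces to cosine similarity between pair vectors. A sketch with hand-made toy vectors (not RelBERT's actual embeddings or API):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy relation embeddings for word pairs, hand-made for illustration only.
pair_embeddings = {
    ("paris", "france"): [0.9, 0.1],  # capital-of relation
    ("tokyo", "japan"):  [0.8, 0.2],  # capital-of relation
    ("big", "bigger"):   [0.1, 0.9],  # comparative relation
}

def most_analogous(query, candidates):
    """Pick the candidate pair whose relation embedding is closest to the query's."""
    return max(candidates, key=lambda p: cosine(pair_embeddings[query], pair_embeddings[p]))

print(most_analogous(("paris", "france"), [("tokyo", "japan"), ("big", "bigger")]))
# -> ('tokyo', 'japan')
```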

wikiart-image-dataset

56 stars, 3 forks

We release WikiART Crawler, a Python library to download and process images from WikiART via the WikiART API, and two image datasets: `WikiART Face` and `WikiART General`.

lm-vocab-trimmer

41 stars, 2 forks

Vocabulary Trimming (VT) is a model compression technique, which reduces a multilingual LM vocabulary to a target language by deleting irrelevant tokens from its vocabulary. This repository contains a...
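Conceptually, vocabulary trimming keeps only the tokens the target language needs, then rebuilds the token-to-id map and the corresponding embedding rows. A toy sketch of that idea (plain lists standing in for the model's embedding matrix; not this repository's implementation):

```python
def trim_vocab(vocab, embeddings, target_tokens):
    """Drop tokens unused by the target language and re-index the survivors,
    copying over only their embedding rows (toy sketch of vocabulary trimming)."""
    keep = [tok for tok in vocab if tok in target_tokens]
    new_vocab = {tok: i for i, tok in enumerate(keep)}
    new_embeddings = [embeddings[vocab[tok]] for tok in keep]
    return new_vocab, new_embeddings

vocab = {"hello": 0, "bonjour": 1, "world": 2, "monde": 3}
emb = [[0.1], [0.2], [0.3], [0.4]]
fr_vocab, fr_emb = trim_vocab(vocab, emb, {"bonjour", "monde"})
# fr_vocab == {'bonjour': 0, 'monde': 1}; fr_emb == [[0.2], [0.4]]
```

The compression win comes from the embedding (and output) matrices shrinking to the trimmed vocabulary size.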

lmppl

159 stars, 16 forks

Calculate perplexity of a text with pre-trained language models. Supports masked LMs (e.g. DeBERTa), recurrent LMs (e.g. GPT3), and encoder-decoder LMs (e.g. Flan-T5).
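Whatever the model family, perplexity is the exponential of the mean negative log-likelihood of the tokens. A self-contained sketch using a unigram LM in place of a transformer (illustration of the quantity only, not lmppl's API):

```python
import math
from collections import Counter

def perplexity(tokens, counts, total):
    """exp(mean negative log-likelihood) under a unigram LM estimated from counts.
    lmppl computes the same quantity with transformer LMs instead."""
    nll = [-math.log(counts[t] / total) for t in tokens]
    return math.exp(sum(nll) / len(nll))

corpus = "the cat sat on the mat".split()
counts = Counter(corpus)
total = len(corpus)
# "the" is the most frequent token, so it is the least "perplexing":
assert perplexity(["the", "the"], counts, total) < perplexity(["cat", "mat"], counts, total)
```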