AGI-Papers
Papers and books to look at when starting AGI

NLP-Papers
NLP · NLU · NLG · Summarization · Sentiment analysis · NER · POS · NMT · QA · Text categorization · Semantic parsing
GOTO PyTorch!
Papers
- [2013/01] Efficient Estimation of Word Representations in Vector Space (skip-gram sketch after this list)
- [2014/12] Dependency-Based Word Embeddings
- [2015/07] Neural Machine Translation of Rare Words with Subword Units
- [2014/07] GloVe: Global Vectors for Word Representation : GloVe
- [2016/06] Siamese CBOW: Optimizing Word Embeddings for Sentence Representations : Siamese CBOW
- [2016/07] Enriching Word Vectors with Subword Information : fastText
- [2014/09] Sequence to Sequence Learning with Neural Networks : seq2seq
- [2017/07] Attention Is All You Need : Transformer (attention sketch after this list)
- [2017/08] Learned in Translation: Contextualized Word Vectors : CoVe
- [2018/01] Universal Language Model Fine-tuning for Text Classification : ULMFiT (fine-tuning sketch after this list)
- [2018/02] Deep contextualized word representations : ELMo
- [2018/06] Improving Language Understanding by Generative Pre-Training : GPT-1
- [2018/10] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding : BERT (masked-LM sketch after this list)
- [2019/02] Language Models are Unsupervised Multitask Learners : GPT-2
- [2019/04] Language Models with Transformers
- [2019/01] Cross-lingual Language Model Pretraining : XLM
- [2019/01] Multi-Task Deep Neural Networks for Natural Language Understanding : MT-DNN
- [2019/01] Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context : Transformer-XL
- [2019/06] XLNet: Generalized Autoregressive Pretraining for Language Understanding : XLNet
- [2019/09] Fine-Tuning Language Models from Human Preferences
- [2019/01] BioBERT: a pre-trained biomedical language representation model for biomedical text mining : BioBERT
- [2019/03] SciBERT: A Pretrained Language Model for Scientific Text : SciBERT
- [2019/04] ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission : ClinicalBERT
- [2019/06] HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization : HIBERT
- [2019/07] SpanBERT: Improving Pre-training by Representing and Predicting Spans : SpanBERT
- [2019/08] Pre-Training with Whole Word Masking for Chinese BERT
- [2019/07] R-Transformer: Recurrent Neural Network Enhanced Transformer : R-Transformer
- [2019/09] FreeLB: Enhanced Adversarial Training for Natural Language Understanding : FreeLB
- [2019/09] Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
- [2019/10] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer : T5
- [2018/07] Subword-level Word Vector Representations for Korean
- [2019/08] Zero-shot Word Sense Disambiguation using Sense Definition Embeddings
- [2019/06] Bridging the Gap between Training and Inference for Neural Machine Translation
- [2019/06] Emotion-Cause Pair Extraction: A New Task to Emotion Analysis in Texts
- [2019/07] A Simple Theoretical Model of Importance for Summarization
- [2019/05] Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems
- [2019/07] We need to talk about standard splits
- [2019/07] ERNIE 2.0: A Continual Pre-training Framework for Language Understanding : ERNIE 2.0
- [2019/05] SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems : SuperGLUE
- [2020/01] Towards a Human-like Open-Domain Chatbot + Google AI Blog
- [2020/03] ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators : ELECTRA
- [2019/04] Mask-Predict: Parallel Decoding of Conditional Masked Language Models : Mask-Predict
- [2020/01] Reformer: The Efficient Transformer : Reformer
- [2020/04] Longformer: The Long-Document Transformer : Longformer
- [2019/11] DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation : DialoGPT
- [2020/04] You Impress Me: Dialogue Generation via Mutual Persona Perception
- [2020/04] ToD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogues : ToD-BERT
- [2020/04] SOLOIST: Few-shot Task-Oriented Dialog with A Single Pre-trained Auto-regressive Model : SOLOIST
- [2020/05] A Simple Language Model for Task-Oriented Dialogue
- [2019/07] ReCoSa: Detecting the Relevant Contexts with Self-Attention for Multi-turn Dialogue Generation : ReCoSa
- [2020/04] FastBERT: a Self-distilling BERT with Adaptive Inference Time : FastBERT
- [2020/01] PoWER-BERT: Accelerating BERT inference for Classification Tasks : PoWER-BERT
- [2019/10] DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter : DistilBERT (distillation sketch after this list)
- [2019/10] TinyBERT: Distilling BERT for Natural Language Understanding : TinyBERT
- [2018/12] Conditional BERT Contextual Augmentation
- [2020/03] Data Augmentation using Pre-trained Transformer Models
- [2020/04] FLAT: Chinese NER Using Flat-Lattice Transformer : FLAT
- [2019/12] Big Transfer (BiT): General Visual Representation Learning : BiT
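A few minimal PyTorch sketches of techniques that recur in the list above follow (PyTorch because that is where this repo already points). First, the skip-gram objective behind word2vec: learn embeddings by predicting each context word from its center word. The toy corpus, window size, and embedding dimension are made up, and real word2vec training uses negative sampling or hierarchical softmax rather than the full softmax shown here.

```python
import torch
import torch.nn as nn

corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = {w: i for i, w in enumerate(sorted(set(corpus)))}
window = 2

# Build (center, context) pairs with a sliding window over the corpus.
pairs = [(vocab[corpus[i]], vocab[corpus[j]])
         for i in range(len(corpus))
         for j in range(max(0, i - window), min(len(corpus), i + window + 1))
         if i != j]

emb_in = nn.Embedding(len(vocab), 16)   # center-word vectors (the ones you keep)
emb_out = nn.Embedding(len(vocab), 16)  # context-word ("output") vectors
opt = torch.optim.Adam(list(emb_in.parameters()) + list(emb_out.parameters()), lr=0.05)

centers = torch.tensor([c for c, _ in pairs])
contexts = torch.tensor([o for _, o in pairs])
for _ in range(100):
    # Score every vocabulary word against each center word (full softmax).
    logits = emb_in(centers) @ emb_out.weight.T
    loss = nn.functional.cross_entropy(logits, contexts)
    opt.zero_grad()
    loss.backward()
    opt.step()
```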
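Next, the scaled dot-product attention from "Attention Is All You Need", the building block of nearly every entry after the seq2seq papers: softmax(QKᵀ/√d_k)·V. This sketch is single-head and unmasked, and the tensor shapes are arbitrary.

```python
import math
import torch

def attention(q, k, v):
    """q, k, v: (batch, seq_len, d_k) tensors."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # (batch, seq, seq)
    weights = torch.softmax(scores, dim=-1)            # each row sums to 1
    return weights @ v                                 # weighted sum of value vectors

q = k = v = torch.randn(2, 5, 64)  # self-attention: queries = keys = values
out = attention(q, k, v)           # -> (2, 5, 64)
```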
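The pretrain-then-fine-tune recipe that ULMFiT, GPT, and BERT share comes down to loading a pretrained checkpoint and training a small task head on labeled data. A minimal sketch, assuming the Hugging Face transformers library; the checkpoint name, the two toy sentences, and the learning rate are placeholders.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # adds a fresh classification head

batch = tok(["great movie", "terrible plot"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])  # toy sentiment labels

opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
out = model(**batch, labels=labels)  # returns a loss when labels are given
out.loss.backward()
opt.step()
```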
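BERT's pretraining objective corrupts about 15% of the input tokens and trains the model to recover them: of the chosen positions, 80% become [MASK], 10% become a random token, and 10% stay unchanged. A sketch of that corruption rule; the token ids, mask id, and vocabulary size are made up. (Whole-word masking, as in the Chinese BERT paper above, selects positions per word instead of per subword token.)

```python
import torch

def mask_tokens(input_ids, mask_id, vocab_size, mlm_prob=0.15):
    input_ids = input_ids.clone()
    labels = input_ids.clone()

    # Select ~15% of positions to predict; ignore the rest in the loss.
    masked = torch.bernoulli(torch.full(input_ids.shape, mlm_prob)).bool()
    labels[~masked] = -100

    # 80% of selected positions -> [MASK].
    replaced = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & masked
    input_ids[replaced] = mask_id

    # Half of the remaining 20% -> a random token; the rest keep their token.
    randomized = (torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool()
                  & masked & ~replaced)
    input_ids[randomized] = torch.randint(vocab_size, input_ids.shape)[randomized]
    return input_ids, labels

ids = torch.randint(5, 1000, (2, 16))  # fake token ids
corrupted, labels = mask_tokens(ids, mask_id=4, vocab_size=1000)
```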
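Finally, DistilBERT- and TinyBERT-style compression trains a small student to imitate a large teacher. The core ingredient is a temperature-softened KL term mixed with the usual hard-label cross-entropy; alpha and T below are illustrative, and DistilBERT's full objective also adds a cosine loss on hidden states.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: match the teacher's temperature-softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # T^2 rescaling keeps gradient magnitudes comparable
    # Hard targets: ordinary supervised cross-entropy.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(8, 3, requires_grad=True)  # student logits
teacher = torch.randn(8, 3)                      # teacher logits (frozen)
labels = torch.tensor([0, 1, 2, 0, 1, 2, 0, 1])
distillation_loss(student, teacher, labels).backward()
```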
Basic knowledge
| mathematics | machine learning |
| --- | --- |
| Mathematics for Machine Learning | Pattern Recognition and Machine Learning |