# Awesome NLP Papers
This is a collection/reading-list of awesome Natural Language Processing papers sorted by date.
## 2018

- [X] Unsupervised Machine Translation Using Monolingual Corpora Only, Lample et al.
- [X] On the Dimensionality of Word Embeddings, Yin et al.
- [X] An efficient framework for learning sentence representations, Logeswaran et al.
- [X] Refining Pretrained Word Embeddings Using Layer-wise Relevance Propagation, Akira Utsumi
- [X] Domain Adapted Word Embeddings for Improved Sentiment Classification, Sarma et al.
- [X] In-domain Context-aware Token Embeddings Improve Biomedical Named Entity Recognition, Sheikhshab et al.
- [X] Generalizing Word Embeddings using Bag of Subwords, Zhao et al.
- [X] What's in Your Embedding, And How It Predicts Task Performance, Rogers et al.
- [X] On Learning Better Word Embeddings from Chinese Clinical Records: Study on Combining In-Domain and Out-Domain Data, Wang et al.
- [X] Predicting and interpreting embeddings for out of vocabulary words in downstream tasks, Garneau et al.
- [X] Addressing Low-Resource Scenarios with Character-aware Embeddings, Papay et al.
- [X] Domain Adaptation for Disease Phrase Matching with Adversarial Networks, Liu et al.
- [X] Investigating Effective Parameters for Fine-tuning of Word Embeddings Using Only a Small Corpus, Komiya et al.
- [X] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Devlin et al.
- [X] Adapting Word Embeddings from Multiple Domains to Symptom Recognition from Psychiatric Notes, Zhang et al.
- [ ] Evaluation of sentence embeddings in downstream and linguistic probing tasks, Perone et al.
- [ ] Universal Sentence Encoder, Cer et al.
- [X] Deep Contextualized Word Representations, Peters et al.
- [X] Learned in Translation: Contextualized Word Vectors, McCann et al.
- [X] Concatenated p-mean Word Embeddings as Universal Cross-Lingual Sentence Representations, Rücklé et al.
- [X] A Compressed Sensing View of Unsupervised Text Embeddings, Bag-Of-n-Grams, and LSTMs, Arora et al.
## 2017

- [X] Attention Is All You Need, Vaswani et al.
- [X] Skip-Gram − Zipf + Uniform = Vector Additivity, Gittens et al.
- [X] A Simple but Tough-to-beat Baseline for Sentence Embeddings, Arora et al.
- [X] Fast and Accurate Entity Recognition with Iterated Dilated Convolutions, Strubell et al.
- [X] Advances in Pre-Training Distributed Word Representations, Mikolov et al.
- [X] Replicability Analysis for Natural Language Processing: Testing Significance with Multiple Datasets, Dror et al.
## 2016

- [X] Towards Universal Paraphrastic Sentence Embeddings, Wieting et al.
- [X] Bag of Tricks for Efficient Text Classification, Joulin et al.
- [X] Enriching Word Vectors with Subword Information, Bojanowski et al.
- [X] Assessing the Corpus Size vs. Similarity Trade-off for Word Embeddings in Clinical NLP, Kirk Roberts
- [X] How to Train Good Word Embeddings for Biomedical NLP, Chiu et al.
- [X] Log-Linear Models, MEMMs, and CRFs, Michael Collins
- [X] Counter-fitting Word Vectors to Linguistic Constraints, Mrkšić et al.
- [X] Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, Wu et al.
## 2015

- [ ] Semi-supervised Sequence Learning, Dai et al.
- [X] Evaluating distributed word representations for capturing semantics of biomedical concepts, Th et al.
## 2014

- [X] GloVe: Global Vectors for Word Representation, Pennington et al.
- [X] Linguistic Regularities in Sparse and Explicit Word Representations, Levy and Goldberg.
- [X] Neural Word Embedding as Implicit Matrix Factorization, Levy and Goldberg.
- [X] word2vec Explained: deriving Mikolov et al.'s negative-sampling word-embedding method, Goldberg and Levy.
- [X] What's in a p-value in NLP?, Søgaard et al.
- [X] How transferable are features in deep neural networks?, Yosinski et al.
- [X] Improving lexical embeddings with semantic knowledge, Yu et al.
- [X] Retrofitting word vectors to semantic lexicons, Faruqui et al.
## 2013

- [X] Efficient Estimation of Word Representations in Vector Space, Mikolov et al.
- [X] Linguistic Regularities in Continuous Space Word Representations, Mikolov et al.
- [X] Distributed Representations of Words and Phrases and their Compositionality, Mikolov et al.
## 2012

- [X] An Empirical Investigation of Statistical Significance in NLP, Berg-Kirkpatrick et al.

## 2010

- [X] Word representations: A simple and general method for semi-supervised learning, Turian et al.

## 2008

- [ ] A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning, Collobert and Weston.

## 2006

- [X] Domain adaptation with structural correspondence learning, Blitzer et al.

## 2003

- [X] A Neural Probabilistic Language Model, Bengio et al.

## 1986

- [ ] Distributed Representations, Hinton et al.