

Deep Learning Paper Review and Practice: thorough deep learning paper reviews with hands-on code

  • This repository hosts thorough deep learning paper reviews and code practice.
  • It covers a variety of popular deep learning papers, with a focus on recent work.
  • Please leave any questions in the Issues tab of this repository.

Image Recognition

Natural Language Processing

  • Single Headed Attention RNN: Stop Thinking With Your Head (2020)
  • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (NAACL 2019)
  • Attention Is All You Need (NIPS 2017)
  • Neural Machine Translation by Jointly Learning to Align and Translate (ICLR 2015 Oral)
  • Show and Tell: A Neural Image Caption Generator (CVPR 2015)
  • Sequence to Sequence Learning with Neural Networks (NIPS 2014)

Generative Model & Super Resolution

Modeling & Optimization

  • Bag of Tricks for Image Classification (CVPR 2019)
    • Original Paper Link / Paper Review Video / Summary PDF
    • CIFAR-10 / CIFAR-10 with Label Smoothing / CIFAR-10 with Input Mixup / CIFAR-10 with Label Smoothing and Input Mixup
  • Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding (ICLR 2016 Oral)
  • Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (ICML 2015)
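The Bag of Tricks experiments above compare CIFAR-10 training with label smoothing and input mixup. A minimal NumPy sketch of both tricks (function names are illustrative, not taken from this repository's code):

```python
import numpy as np

def label_smoothing(one_hot, eps=0.1):
    # Label smoothing: mix the one-hot target with the uniform
    # distribution over the K classes, keeping the total mass at 1.
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

def input_mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    # Input mixup: a convex combination of two training examples,
    # with the mixing weight drawn from Beta(alpha, alpha).
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

# Example: smooth a 3-class one-hot target with eps=0.1
y = np.array([0.0, 1.0, 0.0])
print(label_smoothing(y, eps=0.1))  # ≈ [0.033, 0.933, 0.033]
```

The two tricks compose naturally: mixup blends the (optionally smoothed) targets with the same weight used for the inputs, which is how the combined "Label Smoothing and Input Mixup" run above can be set up.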

Adversarial Examples & Backdoor Attacks

  • HopSkipJumpAttack: A Query-Efficient Decision-Based Attack (S&P 2020)
  • Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates (ICLR 2020)
  • Sign-OPT: A Query-Efficient Hard-label Adversarial Attack (ICLR 2020)
  • Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment (AAAI 2020 Oral)
  • Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach (ICLR 2019)
  • Boosting Adversarial Attacks with Momentum (CVPR 2018 Spotlight)
  • Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks (NeurIPS 2018)
  • Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models (ICLR 2018)

Past Paper Review Content