Deep Learning Cheatsheet
Introduction
- Machine Learning Glossary
- Essential Machine Learning Cheatsheets
- Neural Networks and Deep Learning [Free Online Book]
- Free Deep Learning Book [MIT Press]
- Andrew Ng's machine learning course at Coursera [Material]
- Deep Learning by Google
- Deep Learning in Neural Networks: An Overview
- How To Become A Machine Learning Engineer: Learning Path
- Awesome Deep Vision
- A Guide to Deep Learning
- Deep Learning Weekly [Weekly Newsletter Subscription]
- A Year of Artificial Intelligence [Blog]
- Introduction to Convolutional Neural Networks
- Applied Deep Learning
- Machine Learning is Fun! [Medium Series]
- Machine Learning for Humans [Medium Series]
- My Neural Network isn't working! What should I do?
- The math of neural networks (see the forward-pass sketch after this list)
- Everything you need to know about Neural Networks
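Complementing the math-focused entries above, a minimal NumPy sketch of a two-layer forward pass; the layer sizes, weights, and sigmoid activation here are arbitrary placeholders for illustration:

```python
import numpy as np

def sigmoid(z):
    # Elementwise logistic activation: 1 / (1 + e^-z)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical layer sizes: 3 inputs -> 4 hidden units -> 2 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)

x = rng.standard_normal(3)        # one input example
h = sigmoid(W1 @ x + b1)          # hidden activations
y = sigmoid(W2 @ h + b2)          # network output
print(y)
```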
Python
- Learn Python the Hard Way
- Python Data Science Handbook
- Style Guide for Python Code [PEP 8]
- TensorFlow [Open Source Library]
- PyCUDA
- Cython
TensorFlow
- Official
- Adding a New Op [Official]
- TensorFlow in a Nutshell
- Source Code Examples
- Effective TensorFlow [TensorFlow tutorials and best practices]
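As a quick orientation for the tutorials above, a minimal sketch assuming TensorFlow 2.x with eager execution, showing basic tensor math and autodiff with tf.GradientTape:

```python
import tensorflow as tf

# Two small constant tensors; TF 2.x executes eagerly by default
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0], [0.5]])
c = tf.matmul(a, b)
print(c.numpy())                    # [[2.] [5.]]

# Automatic differentiation: d(x^2)/dx at x = 3 is 6
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2
print(tape.gradient(y, x).numpy())  # 6.0
```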
PyTorch
Conferences
Architectures
- Very Deep Convolutional Networks for Large-Scale Image Recognition [VGG]
- SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
- Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition
- Dynamically Expandable Neural Networks
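As one concrete example from the papers above, a sketch of the SqueezeNet "fire" module in tf.keras: a 1x1 squeeze convolution feeding parallel 1x1 and 3x3 expand convolutions. The channel sizes mirror the paper's fire2 block, but treat the exact configuration as illustrative:

```python
import tensorflow as tf
from tensorflow.keras import layers

def fire_module(x, squeeze_ch, expand_ch):
    # Squeeze with 1x1 convs, then expand with parallel 1x1 and 3x3 convs
    # and concatenate the two expand paths along the channel axis.
    s = layers.Conv2D(squeeze_ch, 1, activation="relu")(x)
    e1 = layers.Conv2D(expand_ch, 1, activation="relu", padding="same")(s)
    e3 = layers.Conv2D(expand_ch, 3, activation="relu", padding="same")(s)
    return layers.Concatenate()([e1, e3])

inputs = tf.keras.Input(shape=(55, 55, 96))   # hypothetical feature map
outputs = fire_module(inputs, squeeze_ch=16, expand_ch=64)
model = tf.keras.Model(inputs, outputs)
model.summary()
```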
AutoEncoder and GANs
- Generative Adversarial Networks (GANs): Engine and Applications
- GANs are Broken in More than One Way: The Numerics of GANs
- Towards data set augmentation with GANs
- How do unpooling and deconvolution work in DeConvNet?
- What are deconvolutional layers? (see the transposed-convolution sketch after this list)
- Visualizing and Understanding Convolutional Networks [Original Unpooling Paper]
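To make the unpooling/deconvolution entries concrete, a minimal sketch of learnable upsampling with a stride-2 transposed convolution, assuming tf.keras; the shapes and filter counts are placeholders:

```python
import tensorflow as tf
from tensorflow.keras import layers

# A stride-2 transposed convolution ("deconvolution") doubles the spatial
# size; it is the learnable upsampling used in DeConvNets and GAN generators.
x = tf.random.normal((1, 8, 8, 64))  # batch of one 8x8 feature map
deconv = layers.Conv2DTranspose(32, kernel_size=3, strides=2, padding="same")
y = deconv(x)
print(y.shape)                       # (1, 16, 16, 32)
```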
Optimizer
- Adam: A Method for Stochastic Optimization [Adam Optimizer]
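A minimal NumPy sketch of the Adam update rule from the paper above; the default hyperparameters follow the paper, while the toy objective is a placeholder:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update: exponential moving averages of the gradient (m) and
    # squared gradient (v), with bias correction for step t (1-indexed).
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = theta^2, whose gradient is 2 * theta
theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 1001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta)  # approaches 0
```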
Overfitting
- Dropout: A Simple Way to Prevent Neural Networks from Overfitting
- A Simple Weight Decay Can Improve Generalization
- Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
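A minimal NumPy sketch of inverted dropout as described in the paper above (scaling at training time so no change is needed at test time), with a note on weight decay; the shapes and drop rate are placeholders:

```python
import numpy as np

def dropout(h, p_drop, rng, train=True):
    # Inverted dropout: zero each unit with probability p_drop during
    # training and rescale the survivors by 1/(1 - p_drop), so the
    # expected activation matches test time, where we return h unchanged.
    if not train:
        return h
    mask = rng.random(h.shape) >= p_drop
    return h * mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
h = rng.standard_normal(10)
print(dropout(h, p_drop=0.5, rng=rng))

# Weight decay is the L2 penalty added to the loss, 0.5 * lam * ||W||^2,
# which contributes lam * W to the gradient of each weight matrix.
```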