CS231n
Working through CS231n: Convolutional Neural Networks for Visual Recognition
Understanding Convolutional Neural Networks
This repository is an archive of the course CS231n: Convolutional Neural Networks for Visual Recognition (Winter 2016). If you’re even vaguely interested in this topic, you should probably take this class. It is outstanding.
To use this repository, make a fork of it and then tick off the items in the following syllabus as you complete them. (You can tick off items by replacing `[ ]` with `[x]` in `README.md`.)
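For instance, once you have watched and read through the first lecture's material, its entries would look like this:

```markdown
- [x] Lecture 1: Intro to computer vision, historical context
  - [x] Video
  - [x] Slides
```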
Happy learning!
Harish Narayanan, 2017
Course Syllabus
- [ ] Lecture 1: Intro to computer vision, historical context
  - [ ] Video
  - [ ] Slides
- [ ] Lecture 2: Image classification and the data-driven approach; k-nearest neighbors; Linear classification I
  - [ ] Video
  - [ ] Slides
  - [ ] Python/NumPy tutorial
  - [ ] Image classification notes
  - [ ] Linear classification notes
- [ ] Lecture 3: Linear classification II; Higher-level representations, image features; Optimization, stochastic gradient descent
  - [ ] Video
  - [ ] Slides
  - [ ] Linear classification notes
  - [ ] Optimization notes
- [ ] Lecture 4: Backpropagation; Introduction to neural networks
  - [ ] Video
  - [ ] Slides
  - [ ] Backprop notes
  - [ ] Related references
    - [ ] Efficient Backprop
    - [ ] Automatic differentiation survey
    - [ ] Calculus on Computational Graphs
    - [ ] Backpropagation Algorithm
    - [ ] Learning: Neural Nets, Back Propagation
- [ ] Assignment 1
  - [ ] k-Nearest Neighbor classifier
  - [ ] Training a Support Vector Machine
  - [ ] Implement a Softmax classifier
  - [ ] Two-Layer Neural Network
  - [ ] Higher Level Representations: Image Features
  - [ ] Cool Bonus: Do something extra!
- [ ] Lecture 5: Training Neural Networks Part 1; Activation functions, weight initialization, gradient flow, batch normalization; Babysitting the learning process, hyperparameter optimization
  - [ ] Video
  - [ ] Slides
  - [ ] Neural Nets notes 1
  - [ ] Neural Nets notes 2
  - [ ] Neural Nets notes 3
  - [ ] Related references
    - [ ] Tips/Tricks 1
    - [ ] Tips/Tricks 2
    - [ ] Tips/Tricks 3
    - [ ] Deep learning review article
- [ ] Lecture 6: Training Neural Networks Part 2: parameter updates, ensembles, dropout; Convolutional Neural Networks: intro
  - [ ] Video
  - [ ] Slides
  - [ ] Neural Nets notes 3
- [ ] Lecture 7: Convolutional Neural Networks: architectures, convolution / pooling layers; Case study of ImageNet challenge winning ConvNets
  - [ ] Video
  - [ ] Slides
  - [ ] ConvNet notes
- [ ] Lecture 8: ConvNets for spatial localization; Object detection
  - [ ] Video
  - [ ] Slides
- [ ] Lecture 9: Understanding and visualizing Convolutional Neural Networks; Backprop into image: Visualizations, deep dream, artistic style transfer; Adversarial fooling examples
  - [ ] Video
  - [ ] Slides
- [ ] Assignment 2
  - [ ] Fully-connected Neural Network
  - [ ] Batch Normalization
  - [ ] Dropout
  - [ ] ConvNet on CIFAR-10
  - [ ] Do something extra!
- [ ] Lecture 10: Recurrent Neural Networks (RNN), Long Short Term Memory (LSTM); RNN language models; Image captioning
  - [ ] Video
  - [ ] Slides
  - [ ] Related references
    - [ ] Recurrent neural networks
    - [ ] Min Char RNN
    - [ ] Char RNN
    - [ ] NeuralTalk2
- [ ] Lecture 11: Training ConvNets in practice; Data augmentation, transfer learning; Distributed training, CPU/GPU bottlenecks; Efficient convolutions
  - [ ] Video
  - [ ] Slides
- [ ] Lecture 12: Overview of Caffe/Torch/Theano/TensorFlow
  - [ ] Video
  - [ ] Slides
- [ ] Assignment 3
  - [ ] Image Captioning with Vanilla RNNs
  - [ ] Image Captioning with LSTMs
  - [ ] Image Gradients: Saliency maps and Fooling Images
  - [ ] Image Generation: Classes, Inversion, DeepDream
  - [ ] Do something extra!
- [ ] Lecture 13: Segmentation; Soft attention models; Spatial transformer networks
  - [ ] Video
  - [ ] Slides
- [ ] Lecture 14: ConvNets for videos; Unsupervised learning
  - [ ] Video
  - [ ] Slides
- [ ] Invited Lecture: A sampling of deep learning at Google
  - [ ] Video
- [ ] Lecture 15: Conclusions
  - [ ] Slides