DeepModels
TensorFlow Implementation of state-of-the-art models since 2012
This repository is mainly for implementing and testing state-of-the-art deep learning models that have appeared since 2012, when AlexNet emerged. Pre-trained models on each dataset will be provided later.
Trying out state-of-the-art deep learning models also requires datasets to feed them and training routines to drive them, so this repository comes with three main parts, Dataset, Model, and Trainer, to ease the whole process.
A dataset and a model are handed to a trainer; the trainer then knows how to run training, resume from where the last run left off, and perform transfer learning.
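For orientation, here is a minimal sketch of how the three parts plug together; the class names and the run_training signature are taken from the Example Usage section below, and the hyper-parameter values are placeholders.
from dataset.cifar10_dataset import Cifar10   # Dataset: loads and feeds the data
from models.googlenet import GoogLeNet        # Model: defines the network graph
from trainers.clftrainer import ClfTrainer    # Trainer: drives training end to end

# Hand a model and a dataset to the trainer, then start training:
# (epochs, batch_size, learning_rate, checkpoint path to save to)
trainer = ClfTrainer(GoogLeNet(), Cifar10())
trainer.run_training(1, 64, 0.0001, './inceptionv1-cifar10.ckpt')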
Dependencies
- numpy >= 1.14.5
- scikit-image >= 0.12.3
- tensorflow >= 1.6
- tqdm >= 4.11.2
- urllib3 >= 1.23
# install all the requirements.
pip install -r requirements.txt
Testing Environment
- macOS High Sierra (10.13.6) + eGPU enclosure (Akitio Node) + NVIDIA GTX 1080Ti
- FloydHub + NVIDIA Tesla K80 / NVIDIA Tesla V100
- GCP Cloud ML Engine + NVIDIA Tesla K80 / NVIDIA Tesla P100 / NVIDIA Tesla V100
Pre-defined Classes
Datasets
- MNIST
  - 10 classes of handwritten digit images of size 28x28
  - 60,000 training images, 10,000 testing images
- CIFAR-10
  - 10 classes of color images of size 32x32
  - 50,000 training images, 10,000 testing images
  - 6,000 images per class
- CIFAR-100
  - 100 classes of color images of size 32x32
  - 600 images per class
  - 500 training images, 100 testing images per class
- Things to be added
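Each predefined dataset is exposed as a class that a trainer consumes directly. A minimal sketch; the CIFAR class names come from the examples below, while the MNIST module path is only an assumed analogue:
from dataset.cifar10_dataset import Cifar10     # 10 classes, 50,000 train / 10,000 test images
from dataset.cifar100_dataset import Cifar100   # 100 classes, 500 train / 100 test images per class
# from dataset.mnist_dataset import Mnist       # assumed path by analogy; check the repository

cifar10_dataset = Cifar10()
cifar100_dataset = Cifar100()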
Models
- AlexNet | 2012 | [CODE]
- VGG | 2014 | [CODE]
  - model types (see the sketch after this list)
    - A: 11 layers, A-LRN: 11 layers with LRN (Local Response Normalization)
    - B: 13 layers
    - C: 16 layers (B plus three convolutional layers with 1x1 kernels)
    - D: 16 layers (known as VGG16)
    - E: 19 layers (known as VGG19)
- Inception V1 (GoogLeNet) | 2014 | [CODE]
- Residual Network | 2015 | [CODE]
  - model types (depth): 18, 34, 50, 101, 152
- Inception V2 | 2015 | [CODE]
- Inception V3 | 2015 | [CODE]
- Residual Network V2 | 2016 | [CODE]
  - model types (depth): 18, 34, 50, 101, 152, 200
- Inception V4 | 2016 | [CODE]
- Inception+ResNet V1 | 2016 | [CODE]
- Inception+ResNet V2 | 2016 | [CODE]
- DenseNet | 2017 | [CODE]
  - model types (depth): 121, 169, 201, 264
- Things to be added
  - SqueezeNet | 2016
  - MobileNet | 2017
  - NASNet | 2017
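As referenced in the VGG entry above, the model types differ only in their convolutional stacks. The dict below is an illustrative encoding of the configurations from the original VGG paper, not this repository's internal representation; layer counts include the three fully-connected layers.
# Numbers are conv output channels; 'M' marks a 2x2 max-pooling layer.
VGG_CONFIGS = {
    'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],           # 8 conv + 3 FC = 11
    'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],  # 10 conv + 3 FC = 13
    'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
          512, 512, 512, 'M', 512, 512, 512, 'M'],                                   # 13 conv + 3 FC = 16 (VGG16)
    'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M',
          512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],                         # 16 conv + 3 FC = 19 (VGG19)
}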
Trainers
- ClfTrainer: trainer for image classification tasks such as ILSVRC
Pre-trained accuracy (coming soon)
- AlexNet
- VGG
- Inception V1 (GoogLeNet)
Example Usage Code Blocks
Define hyper-parameters
learning_rate = 0.0001
epochs = 1
batch_size = 64
Train from scratch
from dataset.cifar10_dataset import Cifar10
from models.googlenet import GoogLeNet
from trainers.clftrainer import ClfTrainer
inceptionv1 = GoogLeNet()
cifar10_dataset = Cifar10()
trainer = ClfTrainer(inceptionv1, cifar10_dataset)
trainer.run_training(epochs, batch_size, learning_rate,
                     './inceptionv1-cifar10.ckpt')
Resume training from where it left off
from dataset.cifar10_dataset import Cifar10
from models.googlenet import GoogLeNet
from trainers.clftrainer import ClfTrainer
inceptionv1 = GoogLeNet()
cifar10_dataset = Cifar10()
trainer = ClfTrainer(inceptionv1, cifar10_dataset)
trainer.resume_training_from_ckpt(epochs, batch_size, learning_rate,
                                  './inceptionv1-cifar10.ckpt-1', './new-inceptionv1-cifar10.ckpt')
Transfer Learning
from dataset.cifar100_dataset import Cifar100
from models.googlenet import GoogLeNet
from trainers.clftrainer import ClfTrainer
inceptionv1 = GoogLeNet()
cifar100_dataset = Cifar100()
trainer = ClfTrainer(inceptionv1, cifar100_dataset)
trainer.run_transfer_learning(epochs, batch_size, learning_rate,
                              './new-inceptionv1-cifar10.ckpt-1', './inceptionv1-cifar100.ckpt')
Testing
from dataset.cifar100_dataset import Cifar100
from models.googlenet import GoogLeNet
from trainers.clftrainer import ClfTrainer
# prepare images to test
images = ...
inceptionv1 = GoogLeNet()
cifar100_dataset = Cifar100()
trainer = ClfTrainer(inceptionv1, cifar100_dataset)
results = trainer.run_testing(images, './inceptionv1-cifar100.ckpt-1')
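One way to fill in images above, assuming scikit-image (already a listed dependency) and CIFAR-100's 32x32 input size; the file names are placeholders:
import numpy as np
from skimage.io import imread
from skimage.transform import resize

paths = ['example1.jpg', 'example2.jpg']  # placeholder file names
# Resize each image to 32x32 to match the CIFAR-100 input size, then batch them.
images = np.stack([resize(imread(p), (32, 32)) for p in paths])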
Basic Workflow
- Define/Instantiate a dataset
- Define/Instantiate a model
- Define/Instantiate a trainer with the dataset and the model
- Begin training/resuming/transfer learning