TensorFlow2.0_Notebooks

Implementation of a series of neural network architectures in TensorFlow 2.0

Author: Ivan Bongiorni, Data Scientist at GfK; LinkedIn.

TensorFlow 2.0 Notebooks

This is a collection of my notebooks on TensorFlow 2.0.

Model training is based on TensorFlow's eager execution. I'll try to minimize references to Keras.

Summary of Contents:

  • Basic feed-forward stuff
  • Autoencoders
  • Convolutional Neural Networks
  • Recurrent Neural Networks
  • Applications to NLP


Contents:

Basic feed-forward stuff:

  1. Basic classifier: implementation of a feed-forward classifier trained with simple, full-batch gradient descent in eager execution (first sketch after this list).

  2. Mini-batch gradient descent: training the same kind of model with mini-batch gradient descent (second sketch below).

  3. Save and restore models: how to train a model, save it, then restore it and keep training (third sketch below).

  4. Frozen layers: training a neural network with some of its layers frozen (fourth sketch below).
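
A minimal sketch of notebook 1's idea: a feed-forward classifier trained with full-batch gradient descent in an eager loop. The data, layer sizes and hyperparameters below are my own placeholders, not the notebook's actual setup, and plain tf.Variables keep Keras mostly out of the picture.

    import numpy as np
    import tensorflow as tf

    # Toy data; the shapes and the 3 classes are placeholder choices.
    X = tf.constant(np.random.rand(500, 20), dtype=tf.float32)
    y = tf.constant(np.random.randint(0, 3, size=500))

    # Two dense layers as raw Variables, no Keras layers involved.
    W1 = tf.Variable(tf.random.normal([20, 64], stddev=0.1))
    b1 = tf.Variable(tf.zeros([64]))
    W2 = tf.Variable(tf.random.normal([64, 3], stddev=0.1))
    b2 = tf.Variable(tf.zeros([3]))
    params = [W1, b1, W2, b2]

    def forward(x):
        h = tf.nn.relu(x @ W1 + b1)
        return h @ W2 + b2                              # logits

    optimizer = tf.optimizers.Adam(0.01)
    for epoch in range(100):
        with tf.GradientTape() as tape:                 # ops run eagerly
            loss = tf.reduce_mean(
                tf.nn.sparse_softmax_cross_entropy_with_logits(
                    labels=y, logits=forward(X)))       # whole dataset at once
        grads = tape.gradient(loss, params)
        optimizer.apply_gradients(zip(grads, params))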
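
For notebook 2, the same loop becomes mini-batch gradient descent by slicing the data with tf.data. This sketch assumes the X, y, forward, params and optimizer objects from the sketch above.

    import tensorflow as tf

    dataset = (tf.data.Dataset.from_tensor_slices((X, y))
               .shuffle(buffer_size=500)
               .batch(32))

    for epoch in range(10):
        for x_batch, y_batch in dataset:        # one step per mini-batch
            with tf.GradientTape() as tape:
                loss = tf.reduce_mean(
                    tf.nn.sparse_softmax_cross_entropy_with_logits(
                        labels=y_batch, logits=forward(x_batch)))
            grads = tape.gradient(loss, params)
            optimizer.apply_gradients(zip(grads, params))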
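
For notebook 3, one standard way to save and restore a custom training loop is tf.train.Checkpoint; a sketch under that assumption, reusing the variables and optimizer from the first sketch.

    import tensorflow as tf

    ckpt = tf.train.Checkpoint(optimizer=optimizer,
                               W1=W1, b1=b1, W2=W2, b2=b2)

    save_path = ckpt.save('./checkpoints/model')   # after some training

    # Later (even in a fresh session): rebuild the same objects, then
    ckpt.restore(save_path)                        # weights + optimizer state
    # ...and keep training exactly where you left off.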
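
For notebook 4, freezing layers in a custom loop simply means leaving the frozen variables out of the list handed to the tape; a sketch reusing the first example's objects.

    import tensorflow as tf

    trainable = [W2, b2]                        # W1, b1 stay frozen

    for epoch in range(20):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(
                tf.nn.sparse_softmax_cross_entropy_with_logits(
                    labels=y, logits=forward(X)))
        grads = tape.gradient(loss, trainable)  # no gradients for frozen vars
        optimizer.apply_gradients(zip(grads, trainable))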


Autoencoders:

  1. Autoencoder for dimensionality reduction: implementation of a stacked autoencoder that compresses datasets to a low-dimensional code (sketched after this list).

  2. Denoising Autoencoder (see CNN section below).

  3. Recurrent Autoencoder (see RNN section below).
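
A minimal sketch of the dimensionality-reduction idea behind notebook 1. For brevity it uses Keras layers (which the notebooks themselves try to avoid), and the 50-to-2 compression on random data is a placeholder, not the notebook's setup.

    import numpy as np
    import tensorflow as tf

    X = np.random.rand(1000, 50).astype(np.float32)   # placeholder dataset

    encoder = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(2)                      # bottleneck = reduced dims
    ])
    decoder = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(50)                     # reconstruct the input
    ])

    optimizer = tf.optimizers.Adam(1e-3)
    for epoch in range(50):
        with tf.GradientTape() as tape:
            recon = decoder(encoder(X))
            loss = tf.reduce_mean(tf.square(recon - X))   # reconstruction MSE
        variables = encoder.trainable_variables + decoder.trainable_variables
        grads = tape.gradient(loss, variables)
        optimizer.apply_gradients(zip(grads, variables))

    X_reduced = encoder(X).numpy()                    # the 2-D embedding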


Convolutional Neural Networks:

  1. Basic CNN classifier: a basic convolutional neural network for multiclass classification (first sketch after this list).

  2. Advanced CNN classifier with custom data augmentation (second sketch below).

  3. Mixed-CNN classifier.

  4. Denoising autoencoder (third sketch below).
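
A minimal sketch of a CNN classifier in the spirit of notebook 1, with a single eager training step; the 28x28 grayscale inputs and 10 classes are placeholder choices.

    import numpy as np
    import tensorflow as tf

    images = np.random.rand(64, 28, 28, 1).astype(np.float32)  # toy batch
    labels = np.random.randint(0, 10, size=64)

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu',
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10)                     # one logit per class
    ])

    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    optimizer = tf.optimizers.Adam()

    with tf.GradientTape() as tape:                   # a single eager step
        loss = loss_fn(labels, model(images))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))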
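
For notebook 2's custom data augmentation, a hand-rolled tf.image pipeline is one common approach; the transforms below are illustrative, not necessarily the notebook's.

    import numpy as np
    import tensorflow as tf

    images = np.random.rand(100, 28, 28, 1).astype(np.float32)  # placeholders
    labels = np.random.randint(0, 10, size=100)

    def augment(image, label):
        # Fresh random transforms per call, so every epoch sees new variants.
        image = tf.image.random_flip_left_right(image)
        image = tf.image.random_brightness(image, max_delta=0.1)
        image = tf.image.random_contrast(image, lower=0.9, upper=1.1)
        return image, label

    dataset = (tf.data.Dataset.from_tensor_slices((images, labels))
               .map(augment)
               .shuffle(1024)
               .batch(32))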
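
A minimal sketch of the denoising idea in notebook 4: corrupt the inputs, but compute the reconstruction loss against the clean originals. The architecture and noise level are placeholders.

    import numpy as np
    import tensorflow as tf

    X = np.random.rand(256, 28, 28, 1).astype(np.float32)             # clean
    X_noisy = X + 0.2 * np.random.randn(*X.shape).astype(np.float32)  # corrupted

    autoencoder = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation='relu', padding='same'),
        tf.keras.layers.MaxPooling2D(),               # 28 -> 14
        tf.keras.layers.Conv2D(8, 3, activation='relu', padding='same'),
        tf.keras.layers.UpSampling2D(),               # 14 -> 28
        tf.keras.layers.Conv2D(1, 3, activation='sigmoid', padding='same')
    ])

    optimizer = tf.optimizers.Adam(1e-3)
    for epoch in range(10):
        with tf.GradientTape() as tape:
            recon = autoencoder(X_noisy)                 # noisy in...
            loss = tf.reduce_mean(tf.square(recon - X))  # ...clean target
        grads = tape.gradient(loss, autoencoder.trainable_variables)
        optimizer.apply_gradients(zip(grads, autoencoder.trainable_variables))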


Recurrent Neural Networks:

  1. LSTM many-to-one forecast model (see the shape sketch after this list).

  2. LSTM many-to-many forecast model (same sketch).

  3. Multivariate LSTM regression (the shape sketch notes the multivariate case).

  4. Seq2seq models (see the encoder-decoder sketch below).
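
The difference between notebooks 1 and 2 is mostly a matter of output shape; a sketch with placeholder sizes (set n_features > 1 and you are in notebook 3's multivariate territory).

    import tensorflow as tf

    T_in, n_features = 30, 1      # window length and channels; placeholders

    # Many-to-one: read a whole window, emit a single next-step forecast.
    many_to_one = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, input_shape=(T_in, n_features)),
        tf.keras.layers.Dense(1)
    ])

    # Many-to-many: emit a prediction at every time step of the window.
    many_to_many = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, return_sequences=True,
                             input_shape=(T_in, n_features)),
        tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1))
    ])

    x = tf.random.normal([8, T_in, n_features])   # a batch of 8 windows
    print(many_to_one(x).shape)     # (8, 1)
    print(many_to_many(x).shape)    # (8, 30, 1)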
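
A bare-bones encoder-decoder (seq2seq) sketch for notebook 4: the encoder's final LSTM state initializes the decoder. All sizes are illustrative, not the notebook's.

    import tensorflow as tf

    T_in, T_out, n_features = 30, 10, 1               # placeholder lengths

    encoder_in = tf.keras.Input(shape=(T_in, n_features))
    _, state_h, state_c = tf.keras.layers.LSTM(
        64, return_state=True)(encoder_in)

    decoder_in = tf.keras.Input(shape=(T_out, n_features))
    decoder_out = tf.keras.layers.LSTM(64, return_sequences=True)(
        decoder_in, initial_state=[state_h, state_c])  # hand over the state
    outputs = tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(1))(decoder_out)

    seq2seq = tf.keras.Model([encoder_in, decoder_in], outputs)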


RNN + Natural Language Processing:

  1. LSTM text generator, from this repository of mine (a miniature sketch follows).
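
A character-level text generator in miniature, assuming nothing about the linked repository's actual preprocessing or architecture: windows of characters predict the next one, and sampling from the logits generates text.

    import numpy as np
    import tensorflow as tf

    text = "hello tensorflow " * 200                  # toy corpus
    chars = sorted(set(text))
    char2idx = {c: i for i, c in enumerate(chars)}
    encoded = np.array([char2idx[c] for c in text])

    seq_len = 20
    X = np.stack([encoded[i:i + seq_len]
                  for i in range(len(encoded) - seq_len)])
    y = encoded[seq_len:]                             # next char per window

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(len(chars), 16),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(len(chars))             # logits over the alphabet
    ])

    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    optimizer = tf.optimizers.Adam()
    for epoch in range(5):                            # same eager pattern as above
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(X))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

    # Generation: feed a seed window, sample one character, slide the window.
    seed = encoded[:seq_len]
    next_id = tf.random.categorical(model(seed[None, :]), num_samples=1)[0, 0]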