

## Neural network and deep learning libraries for R

This is a curated list of libraries and frameworks for neural networks and deep learning in R. Please feel free to contribute.

- **fastai**: Provides R wrappers to fastai. The fastai library simplifies training fast and accurate neural nets using modern best practices. The library is based on research into deep learning best practices undertaken at fast.ai, and includes "out of the box" support for vision, text, tabular, audio, time-series, and collab (collaborative filtering) models.
- **torch**: Direct bindings to the libtorch C++ library (which also powers PyTorch). This set of packages (https://github.com/mlverse) aims to build a PyTorch-like ecosystem in R.
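  A minimal training-loop sketch with torch; the data, layer size, and learning rate are illustrative:

  ```r
  library(torch)

  # toy regression data: 100 observations, 3 features
  x <- torch_randn(100, 3)
  y <- torch_randn(100, 1)

  model <- nn_linear(3, 1)                        # single linear layer
  opt   <- optim_sgd(model$parameters, lr = 0.01)

  for (epoch in 1:100) {
    opt$zero_grad()
    loss <- nnf_mse_loss(model(x), y)             # mean squared error
    loss$backward()                               # autograd backward pass
    opt$step()
  }
  ```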
- **platypus**: R package for object detection and image segmentation using YOLOv3 and U-Net.
- **nnlib2Rcpp**: A collection of neural networks, including versions of 'BP', 'Autoencoder', 'LVQ' (supervised and unsupervised), and 'MAM'.
- **nntrf**: Performs a supervised transformation of datasets. The aim is similar to that of Principal Component Analysis (PCA), that is, to carry out data transformation and dimensionality reduction, but in a non-linear, supervised way. This is achieved by first training a 3-layer multi-layer perceptron and then using the activations of the hidden layer as a transformation of the input features, taking advantage of the change of representation provided by the hidden layer. This can be useful as data pre-processing for machine learning methods in general, especially for those that do not work well with many irrelevant or redundant features. Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986) "Learning representations by back-propagating errors" doi:10.1038/323533a0
- **nnetsauce**: Statistical/machine learning using advanced combinations of randomized and quasi-randomized neural network layers. It contains models for regression, classification, and time series forecasting.
- **rTorch**: R implementation of, and interface to, the machine learning platform 'PyTorch' (https://pytorch.org/), developed in Python. It requires a 'conda' environment with 'torch' and 'torchvision' to provide PyTorch functions, methods, and classes. The key object in PyTorch is the tensor, which is in essence a multidimensional array. These tensors are flexible enough to perform calculations on CPUs as well as GPUs to accelerate the process.
- **GRnnet**: Implementation of the General Regression Neural Network (GRNN; Specht, 1991).
- **NeuralSens**: Analysis functions to quantify input importance in neural network models. Functions are available for calculating and plotting input importance and for obtaining the activation function of each neuron layer and its derivatives. The importance of a given input is defined as the distribution of the derivatives of the output with respect to that input at each training data point.
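  A hedged sketch of the intended workflow, assuming the package's SensAnalysisMLP() function accepts a fitted nnet model as in its documentation:

  ```r
  library(nnet)
  library(NeuralSens)

  mod <- nnet(mpg ~ wt + hp, data = mtcars, size = 5,
              linout = TRUE, trace = FALSE)

  # partial derivatives of the output with respect to each input,
  # evaluated at every training point
  sens <- SensAnalysisMLP(mod)
  ```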
- **deepNN**: Implementation of some deep learning methods. Includes multilayer perceptron, different activation functions, regularisation strategies, stochastic gradient descent, and dropout.
- **ruta**: Unsupervised deep neural networks, from building their architecture to their training and evaluation; built on top of keras and tensorflow.
- **kerasformula**: Adds a high-level interface for 'keras' neural nets. kms() fits a neural net and accepts R formulas to aid data munging and hyperparameter selection. kms() can optionally accept a compiled keras_sequential_model() from 'keras'. It accepts a number of parameters (such as loss and optimizer) and splits the data into sparse test and training matrices. kms() returns a single object with predictions, a confusion matrix, and function-call details.
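  A minimal sketch of the formula interface described above (the $confusion field name is an assumption based on the documented return value):

  ```r
  library(kerasformula)

  # fit a small classifier straight from a formula; kms() handles
  # the train/test split and data munging internally
  out <- kms(Species ~ Sepal.Length + Sepal.Width, data = iris)
  out$confusion   # confusion matrix on the held-out split (field name assumed)
  ```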
- **BoltzMM**: Provides probability computation, data generation, and model estimation for fully-visible Boltzmann machines. It follows the methods described in Nguyen and Wood (2016a) doi:10.1162/NECO_a_00813 and Nguyen and Wood (2016b) doi:10.1109/TNNLS.2015.2425898.
- **keras**: RStudio's R interface to Keras.
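  A small, self-contained example using the keras R API (architecture and hyperparameters are illustrative):

  ```r
  library(keras)

  model <- keras_model_sequential() %>%
    layer_dense(units = 16, activation = "relu", input_shape = 4) %>%
    layer_dense(units = 3, activation = "softmax")

  model %>% compile(loss = "categorical_crossentropy",
                    optimizer = "adam", metrics = "accuracy")

  x <- as.matrix(iris[, 1:4])
  y <- to_categorical(as.integer(iris$Species) - 1, num_classes = 3)

  model %>% fit(x, y, epochs = 10, batch_size = 16)
  ```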
- **kerasR**: R interface to the keras library, by Taylor Arnold.
- **tensorflow**: R interface to TensorFlow.
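  The R package exposes the TensorFlow API through the tf object; a minimal eager-execution example:

  ```r
  library(tensorflow)

  a <- tf$constant(matrix(c(1, 2, 3, 4), 2, 2))
  b <- tf$constant(matrix(c(5, 6, 7, 8), 2, 2))
  tf$matmul(a, b)   # returns the 2x2 matrix product as a tensor
  ```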
- **nnet**: Software for feed-forward neural networks with a single hidden layer, and for multinomial log-linear models.
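  A standard single-hidden-layer classifier on iris (hyperparameters are illustrative):

  ```r
  library(nnet)

  set.seed(1)
  fit <- nnet(Species ~ ., data = iris, size = 5,
              decay = 5e-4, maxit = 200, trace = FALSE)
  table(predicted = predict(fit, iris, type = "class"),
        actual = iris$Species)
  ```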
- **neuralnet**: Training of neural networks using backpropagation, resilient backpropagation with (Riedmiller, 1994) or without weight backtracking (Riedmiller and Braun, 1993), or the modified globally convergent version by Anastasiadis et al. (2005). The package allows flexible settings through custom choice of error and activation functions. Furthermore, the calculation of generalized weights (Intrator O & Intrator N, 1993) is implemented.
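  A small regression example; compute() is the package's classic prediction helper:

  ```r
  library(neuralnet)

  set.seed(1)
  # learn the square root function from examples
  df   <- data.frame(x = runif(50, 0, 100))
  df$y <- sqrt(df$x)

  fit <- neuralnet(y ~ x, data = df, hidden = c(5, 3))
  compute(fit, data.frame(x = c(4, 25, 81)))$net.result   # roughly 2, 5, 9
  ```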
- **NeuralNetTools**: Visualization and analysis tools to aid in the interpretation of neural network models. Functions are available for plotting, quantifying variable importance, conducting a sensitivity analysis, and obtaining a simple list of model weights.
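  For example, applied to a fitted nnet model (a regression fit keeps the importance functions single-output):

  ```r
  library(nnet)
  library(NeuralNetTools)

  mod <- nnet(mpg ~ wt + hp, data = mtcars, size = 3,
              linout = TRUE, trace = FALSE)

  plotnet(mod)   # network diagram
  garson(mod)    # variable importance (Garson's algorithm)
  olden(mod)     # variable importance (Olden's connection weights)
  ```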
- **RSNNS**: The Stuttgart Neural Network Simulator (SNNS) is a library containing many standard implementations of neural networks. This package wraps the SNNS functionality to make it available from within R. Using the RSNNS low-level interface, all of the algorithmic functionality and flexibility of SNNS can be accessed. Furthermore, the package contains a convenient high-level interface, so that the most common neural network topologies and learning algorithms integrate seamlessly into R.
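  The high-level interface in action: mlp() with one-hot encoded labels via decodeClassLabels():

  ```r
  library(RSNNS)

  x <- scale(iris[, 1:4])
  y <- decodeClassLabels(iris$Species)   # one-hot encode the class labels

  fit <- mlp(x, y, size = 5, maxit = 100)
  head(fitted(fit))
  ```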
- **FCNN4R**: Provides an interface to kernel routines from the FCNN C++ library. FCNN is based on a completely new artificial neural network representation that offers unmatched efficiency, modularity, and extensibility. FCNN4R provides standard teaching (backpropagation, Rprop, simulated annealing, stochastic gradient) and pruning algorithms (minimum magnitude, Optimal Brain Surgeon), but it is first and foremost an efficient computational engine. Users can easily implement their own algorithms by taking advantage of fast gradient computing routines, as well as network reconstruction functionality (removing weights and redundant neurons, reordering inputs, merging networks). Networks can be exported to C functions in order to integrate them into virtually any software solution.
- **softmaxreg**: Implementation of 'softmax' regression and classification models with a multi-layer neural network. It can be used for tasks such as word-embedding-based document classification and 'MNIST' handwritten digit recognition. Multiple optimization algorithms, including 'SGD', 'Adagrad', 'RMSprop', 'Moment', and 'NAG', are also provided.
- **DARCH**: The darch package is built on the code from G. E. Hinton and R. R. Salakhutdinov (available under Matlab Code for deep belief nets). The package generates neural networks with many layers (deep architectures) and trains them with the method introduced in "A fast learning algorithm for deep belief nets" (G. E. Hinton, S. Osindero, Y. W. Teh, 2006) and "Reducing the dimensionality of data with neural networks" (G. E. Hinton and R. R. Salakhutdinov, 2006). This method includes pre-training with the contrastive divergence method published by G. E. Hinton (2002) and fine-tuning with commonly known training algorithms such as backpropagation or conjugate gradients. Additionally, supervised fine-tuning can be enhanced with maxout and dropout, two recently developed techniques to improve fine-tuning for deep learning.
- **deeplearning**: An implementation of deep neural networks with rectified linear units, trained with stochastic gradient descent and batch normalization. A combination of these methods has achieved state-of-the-art performance in ImageNet classification by overcoming the gradient saturation problem experienced by many deep architectures in the past. In addition, batch normalization and dropout are implemented as a means of regularization. The deeplearning package is inspired by the darch package and uses its DArch class.
- **deepnet**: Implements some deep learning architectures and neural network algorithms, including BP, RBM, DBN, deep autoencoders, and so on.
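  A quick sketch with deepnet's nn.train()/nn.predict() (data and sizes are illustrative):

  ```r
  library(deepnet)

  set.seed(1)
  x <- matrix(rnorm(400), 100, 4)
  y <- as.integer(x[, 1] + x[, 2] > 0)   # simple separable target

  nn   <- nn.train(x, y, hidden = c(10), numepochs = 20)
  pred <- nn.predict(nn, x)
  ```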
- **rnn**: Implementation of a recurrent neural network in R, including GRU and LSTM units.
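  The package's classic demo is learning 8-bit binary addition with trainr(); a condensed version based on the package's own example:

  ```r
  library(rnn)

  set.seed(1)
  a <- sample(0:127, 5000, replace = TRUE)
  b <- sample(0:127, 5000, replace = TRUE)

  X <- array(c(int2bin(a, length = 8), int2bin(b, length = 8)),
             dim = c(5000, 8, 2))          # samples x time steps x inputs
  Y <- int2bin(a + b, length = 8)          # samples x time steps

  model <- trainr(Y = Y, X = X, learningrate = 0.1,
                  hidden_dim = 10, numepochs = 5)
  ```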
- **rbm**: Restricted Boltzmann Machines in R (also see Landgraf's implementation).
- **RcppDL**: This package is based on the C++ code from Yusuke Sugomori, which implements basic machine learning methods with many layers (deep learning), including dA (denoising autoencoder), SdA (stacked denoising autoencoder), RBM (restricted Boltzmann machine), and DBN (deep belief nets).
- **h2o deeplearning**: Deep learning in H2O: model high-level abstractions in data by using non-linear transformations in a layer-by-layer method. H2O's deep learning can also make use of unlabeled data (e.g. via autoencoders) that many other algorithms cannot. Also see deepwater.
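  A minimal supervised example with h2o (requires a local H2O instance, which h2o.init() starts):

  ```r
  library(h2o)

  h2o.init()
  train <- as.h2o(iris)

  fit <- h2o.deeplearning(x = 1:4, y = "Species", training_frame = train,
                          hidden = c(32, 32), epochs = 10)
  h2o.predict(fit, train)
  ```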
- **mxnet**: Lightweight, portable, flexible distributed/mobile deep learning with a dynamic, mutation-aware dataflow dependency scheduler.
- **TensorFlow**: TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.
- **autoencoder**: Implementation of the sparse autoencoder in the R environment, following the notes of Andrew Ng. The features learned by the hidden layer of the autoencoder (through unsupervised learning of unlabeled data) can be used in constructing deep belief neural networks.
- **SAENET**: An implementation of a stacked sparse autoencoder for dimension reduction of features and pre-training of feed-forward neural networks, built on the 'neuralnet' package. The package also includes a predict function for the stacked autoencoder object to generate the compressed representation of new data if required. For the purposes of this package, 'stacked' is defined in line with http://ufldl.stanford.edu/wiki/index.php/Stacked_Autoencoders . The underlying sparse autoencoder is defined in the documentation of 'autoencoder'.
- **nnetpredint**: Computes prediction intervals for neural network models (e.g. backpropagation networks) at a given confidence level. It can take the output of models trained by other packages such as 'nnet', 'neuralnet', and 'RSNNS'.
- **deepr**: An R package to streamline the training, fine-tuning, and predicting processes for deep learning, based on darch and deepnet.
- **pnn**: Probabilistic neural networks: the pnn package implements the algorithm proposed by Specht (1990), written in the R statistical language. It solves a common problem in automatic learning: given a set of observations, each described by a vector of quantitative variables, classify them into a given number of groups. The algorithm is trained on this dataset and should afterwards guess the group of any new observation. This neural network has the main advantage of beginning to generalize instantaneously, even with a small set of known observations. It is delivered with four functions (learn, smooth, perf, and guess) and a dataset. The functions are documented with examples and provided with unit tests.
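  A sketch of the four-function workflow named above, using the dataset bundled with the package (the dataset name norms, and the expectation that the category sits in the first column, are assumptions):

  ```r
  library(pnn)

  data(norms)                        # bundled example dataset (name assumed)
  fit <- learn(norms)                # store the training observations
  fit <- smooth(fit, sigma = 0.8)    # set the kernel bandwidth
  fit <- perf(fit)                   # evaluate performance on the training set
  guess(fit, c(1, 1))                # classify a new observation
  ```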
- **qrnn**: Quantile regression neural network: fit a quantile regression neural network with optional left censoring using a variant of the finite smoothing algorithm.
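  A hedged sketch, assuming the qrnn.fit()/qrnn.predict() interface with tau giving the target quantile:

  ```r
  library(qrnn)

  x <- as.matrix(mtcars$wt)
  y <- as.matrix(mtcars$mpg)

  # conditional median (tau = 0.5); use e.g. tau = 0.1 and 0.9 for interval bounds
  fit  <- qrnn.fit(x, y, n.hidden = 2, tau = 0.5, n.trials = 1)
  pred <- qrnn.predict(x, fit)
  ```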
- **validann**: Validation tools for artificial neural networks. Methods and tools for analysing and validating the outputs and modelled functions of artificial neural networks (ANNs) in terms of predictive, replicative, and structural validity. Also provides a method for fitting feed-forward ANNs with a single hidden layer.
- **TeachNet**: Fits neural networks as a way to learn about backpropagation. Can fit neural networks with up to two hidden layers and two different error functions, and can handle weight decay. However, it computes only one output neuron and is very slow.
- **neural**: RBF and MLP neural networks with a graphical user interface.
- **monmlp**: Monotone multi-layer perceptron neural network. Train and make predictions from a multi-layer perceptron neural network with partial monotonicity constraints.
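  A hedged sketch of the monotone constraint, assuming the monmlp.fit()/monmlp.predict() interface (monotone names the input columns constrained to a non-decreasing effect):

  ```r
  library(monmlp)

  set.seed(1)
  x <- as.matrix(seq(-2, 2, length.out = 100))
  y <- as.matrix(x^3 + rnorm(100, sd = 0.5))

  # monotone = 1: force the response to be non-decreasing in column 1 of x
  fit  <- monmlp.fit(x, y, hidden1 = 4, monotone = 1, iter.max = 500)
  pred <- monmlp.predict(x, weights = fit)
  ```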
- **learNN**: Examples of neural networks. Implementations of several basic neural network concepts in R, based on posts at http://qua.st/.
- **grnn**: Implements the general regression neural network algorithm proposed by Specht (1991).
- **GMDH**: Short-term forecasting via GMDH-type neural network algorithms. The group method of data handling (GMDH)-type neural network algorithm is a heuristic self-organization method for modelling complex systems. In this package, GMDH-type neural network algorithms are applied to make short-term forecasts for a univariate time series.
- **elmNNRcpp**: Training and predict functions for single hidden-layer feedforward neural networks (SLFN) using the Extreme Learning Machine (ELM) algorithm. The ELM algorithm differs from traditional gradient-based algorithms in its very short training times (it does not need any iterative tuning, which makes learning very fast), and there is no need to set parameters like learning rate, momentum, or epochs. This is a reimplementation of the 'elmNN' package using 'RcppArmadillo', after the 'elmNN' package was archived. For more information, see "Extreme learning machine: Theory and applications" by Guang-Bin Huang, Qin-Yu Zhu, Chee-Kheong Siew (2006), Elsevier B.V., doi:10.1016/j.neucom.2005.12.126
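  A short sketch with elm_train()/elm_predict() (note the matrix response; sizes are illustrative):

  ```r
  library(elmNNRcpp)

  x <- as.matrix(mtcars[, c("wt", "hp")])
  y <- as.matrix(mtcars$mpg)     # elm_train() expects a matrix response

  fit  <- elm_train(x, y, nhid = 20, actfun = "sig")   # 20 random hidden units
  pred <- elm_predict(fit, newdata = x)
  ```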
- **brnn**: Bayesian regularization for feed-forward neural networks.
- **AMORE**: This package was born to release the TAO robust neural network algorithm to R users. It has grown since, and it can be of interest both to users wanting to implement their own training algorithms and to those whose needs lie only in "user space".
- **simpleNeural**: An easy-to-use multilayer perceptron. Trains neural networks (multilayer perceptrons with one hidden layer) for bi- or multi-class classification.