PositNN
Framework in C++ for deep learning (training and testing) using posits. The posits are emulated with the stillwater-sc/universal library, and the deep learning functionality is based on PyTorch (LibTorch).
This is being developed for a thesis to obtain a Master of Science Degree in Aerospace Engineering.
Table of Contents
- Examples
- Installation
- Features
- Usage
- Tests
- Cite
- Contributing
- Team
- Support
- License
Examples
Folder "examples" includes: example of CMakeLists.txt for a project, an example of a FC Neural Network applied to the MNIST dataset, and LeNet-5 applied to the MNIST dataset. To know how to run an example, check the section Tests.
- Example of training LeNet-5 on Fashion MNIST using posits

Installation
Requirements
- Fork of stillwater-sc/universal: gonced8/universal
- PyTorch for C++ (LibTorch)
- PositNN (this library)
- cmake >= v3
- gcc >= v5
- glibc >= v2.17
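To check that your toolchain meets these version requirements, you can run, for example:
$ cmake --version
$ gcc --version
$ ldd --version
(ldd --version reports the glibc version on GNU/Linux systems.)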
Setup
- Clone this repository
$ git clone https://github.com/hpc-ulisboa/posit-neuralnet.git
- Clone the gonced8/universal repository inside the include folder
$ cd posit-neuralnet/include
$ git clone https://github.com/gonced8/universal.git
- Download the appropriate PyTorch for C++ (LibTorch) and unzip it inside the include folder as well (example commands below)
https://pytorch.org/get-started/locally/
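For example, from inside the include folder on Linux, a CPU-only LibTorch build can be fetched and unzipped as follows; the URL is only illustrative, so pick the variant that matches your setup from the page above:
$ wget https://download.pytorch.org/libtorch/cpu/libtorch-shared-with-deps-latest.zip
$ unzip libtorch-shared-with-deps-latest.zip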
Features
- Use any posit configuration
- Activation functions: ReLU, Sigmoid, Tanh
- Layers: Batch Normalization, Convolution, Dropout, Linear (Fully-Connected), Pooling (average and max)
- Loss functions: Cross-Entropy, Mean Squared Error
- Optimizer: SGD
- Tensor class: StdTensor
- Parallelization: multithreading with std::thread
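As an illustration of the arbitrary posit configurations listed above, the following minimal sketch exercises the underlying universal library directly (it does not use the PositNN API). The header path and namespace assume a recent stillwater-sc/universal layout; the gonced8 fork may instead expose <universal/posit/posit> and the sw::unum namespace, so adjust accordingly.

// posit_demo.cpp - minimal sketch of posit arithmetic with the universal library
#include <iostream>
#include <universal/number/posit/posit.hpp>

int main() {
    // posit<nbits, es>: any bit width and exponent size can be chosen
    using Posit8  = sw::universal::posit<8, 0>;
    using Posit16 = sw::universal::posit<16, 1>;

    Posit8  a = 0.3, b = 0.4;
    Posit16 x = 0.3, y = 0.4;

    std::cout << "8-bit : a * b = " << a * b << '\n';
    std::cout << "16-bit: x * y = " << x * y << '\n';
    return 0;
}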
Usage
- Copy the CMakeLists.txt from the examples folder and adapt it to your setup, namely the directories of universal and PositNN and the number of threads (see the sketch after the build commands below)
- Build your project
$ mkdir build; cd build
$ cmake .. -DCMAKE_PREFIX_PATH="/path/to/libtorch"
$ make
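For reference, a minimal CMakeLists.txt might look like the sketch below. The include paths follow the layout from the Installation section, the train_posit target name follows the mnist_fcnn example, and the source file name and the NUM_THREADS definition are assumptions to adapt; the files under examples remain the authoritative reference.

cmake_minimum_required(VERSION 3.0)
project(train_posit)

# LibTorch (its path is passed via -DCMAKE_PREFIX_PATH=/path/to/libtorch)
find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")

# Headers of PositNN and of the universal posit library (adjust to your layout)
include_directories(
    ${CMAKE_CURRENT_SOURCE_DIR}/../../include
    ${CMAKE_CURRENT_SOURCE_DIR}/../../include/universal/include
)

# Source file name is an assumption; use your own
add_executable(train_posit train_posit.cpp)
target_link_libraries(train_posit "${TORCH_LIBRARIES}")
set_property(TARGET train_posit PROPERTY CXX_STANDARD 14)

# Example of fixing the number of threads at compile time (definition name is an assumption)
# target_compile_definitions(train_posit PRIVATE NUM_THREADS=4)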
Tests
To check if everything is installed correctly, you can try one of the examples. The following steps describe how to test the mnist_fcnn example.
- Go to the project folder
$ cd examples/mnist_fcnn
- (Optional) Edit CMakeLists.txt for your configuration, namely the positnn and universal paths. The provided file assumes that both are inside the include folder at the repository root directory, that is, relative to your current path:
../../include
../../include/universal/include
- Build the project. Specify the absolute path to the PyTorch (LibTorch) folder. This example assumes that the folder is also inside the include folder.
$ mkdir build; cd build
$ cmake .. -DCMAKE_PREFIX_PATH="/home/gonced8/posit-neuralnet/include/libtorch"
$ make
- Run the program. If you are saving the models, make sure the appropriate output folder exists (for example, as shown below).
$ ./train_posit
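If the example is configured to save models, create the corresponding folder before running it; the path below is only a placeholder for whatever directory the example actually writes to:
$ mkdir -p path/to/output_folder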
Cite
@InProceedings{Raposo2021,
author = {Gonçalo Raposo and Pedro Tom{\'{a}}s and Nuno Roma},
booktitle = {{ICASSP} 2021 - 2021 {IEEE} International Conference on Acoustics, Speech and Signal Processing ({ICASSP})},
title = {{PositNN: Training Deep Neural Networks with Mixed Low-Precision Posit}},
year = {2021},
month = {jun},
pages = {7908–7912},
publisher = {{IEEE}},
abstract = {Low-precision formats have proven to be an efficient way to reduce not only the memory footprint but also the hardware resources and power consumption of deep learning computations. Under this premise, the posit numerical format appears to be a highly viable substitute for the IEEE floating-point, but its application to neural networks training still requires further research. Some preliminary results have shown that 8-bit (and even smaller) posits may be used for inference and 16-bit for training, while maintaining the model accuracy. The presented research aims to evaluate the feasibility to train deep convolutional neural networks using posits. For such purpose, a software framework was developed to use simulated posits and quires in end-to-end training and inference. This implementation allows using any bit size, configuration, and even mixed precision, suitable for different precision requirements in various stages. The obtained results suggest that 8-bit posits can substitute 32-bit floats during training with no negative impact on the resulting loss and accuracy.},
doi = {10.1109/ICASSP39728.2021.9413919},
eprint = {2105.00053},
eprintclass = {cs.LG},
eprinttype = {arXiv},
keywords = {posit numerical format, low-precision arithmetic, deep neural networks, training, inference},
url = {https://ieeexplore.ieee.org/document/9413919},
}
Contributing
If you'd like to get involved, e-mail me at [email protected]
Team
Student
- Gonçalo Eduardo Cascalho Raposo - @gonced8
Supervisors
- Prof. Nuno Roma
- Prof. Pedro Tomás - @pedrotomas
Support
Reach out to me at: [email protected]
License
- MIT license
- README.md based on the template from FVCproductions.