half-precision topic
float16
Stage 3 IEEE 754 half-precision floating-point ponyfill
FP16
Conversion to/from half-precision floating-point formats
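
FP16 implements these conversions with fast, branch-free bit manipulation; the sketch below is not FP16's code, just a plain-spelling of the algorithm such a library implements, decoding a binary16 value (including subnormals, infinities, and NaNs) into a float:

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>

// Decode an IEEE 754 binary16 value into a float.
float half_to_float(uint16_t h) {
    uint32_t sign = (uint32_t)(h >> 15) << 31;
    uint32_t exp  = (h >> 10) & 0x1F;   // 5-bit exponent field
    uint32_t frac = h & 0x3FF;          // 10-bit stored significand

    uint32_t bits;
    if (exp == 0) {
        if (frac == 0) {
            bits = sign;                 // signed zero
        } else {
            // Subnormal: shift the significand up until the implicit bit appears.
            int shift = 0;
            while (!(frac & 0x400)) { frac <<= 1; ++shift; }
            frac &= 0x3FF;
            bits = sign | ((uint32_t)(127 - 15 + 1 - shift) << 23) | (frac << 13);
        }
    } else if (exp == 31) {
        bits = sign | 0x7F800000u | (frac << 13);      // Inf or NaN
    } else {
        bits = sign | ((exp - 15 + 127) << 23) | (frac << 13);  // rebias 15 -> 127
    }
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}

int main() {
    printf("%f\n", half_to_float(0x3C00));  // 1.0
    printf("%f\n", half_to_float(0xC000));  // -2.0
}
```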
h5cpp
C++17 templates between [std::vector | armadillo | eigen3 | ublas | blitz++] and HDF5 datasets
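
For contrast with h5cpp's templated one-liners, here is the raw HDF5 C API boilerplate needed to persist a `std::vector<float>`. This is the underlying C API that h5cpp wraps, not h5cpp's own interface; every step below (file, dataspace, dataset, write, cleanup) is what the templates infer from the container type:

```cpp
#include <hdf5.h>
#include <vector>

int main() {
    std::vector<float> v(1024, 0.5f);

    // Create the file, a 1-D dataspace matching the vector, and a dataset.
    hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hsize_t dims[1] = { v.size() };
    hid_t space = H5Screate_simple(1, dims, nullptr);
    hid_t dset  = H5Dcreate2(file, "values", H5T_NATIVE_FLOAT, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    // Write the whole buffer, then release every handle in reverse order.
    H5Dwrite(dset, H5T_NATIVE_FLOAT, H5S_ALL, H5S_ALL, H5P_DEFAULT, v.data());
    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
}
```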
universal
A large collection of number systems providing custom arithmetic and mixed-precision algorithms for AI, machine learning, computer vision, signal processing, CAE, EDA, control, optimization, estimation, and other domains
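
The mixed-precision pattern that universal targets looks like the sketch below: values are stored in a compact type but accumulated in a wider one. Here float/double are stand-ins; with universal, the storage type could be one of its posit or fixed-point types instead (its exact headers and type names are not reproduced here):

```cpp
#include <cstdio>
#include <vector>

// Generic dot product parameterized on storage and accumulation types.
template <typename StorageT, typename AccumulateT>
AccumulateT dot(const std::vector<StorageT>& a, const std::vector<StorageT>& b) {
    AccumulateT sum{0};
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += AccumulateT(a[i]) * AccumulateT(b[i]);
    return sum;
}

int main() {
    std::vector<float> a(1000, 0.1f), b(1000, 0.1f);
    // Accumulating in double hides the rounding error of 1000 float sums.
    printf("%.9f\n", dot<float, double>(a, b));
}
```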
pytorch-model-parallel
A memory-balanced and communication-efficient model-parallel PyTorch implementation of a fully connected layer with cross-entropy loss
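
The communication-efficient part of this design is that when the output classes are sharded across workers, softmax cross-entropy needs only two scalar reductions per sample (a global max, then a global sum of exponentials) instead of gathering the full logits. Below is a standalone C++/MPI sketch of that reduction pattern; the repository itself implements it in PyTorch, and the data here is illustrative:

```cpp
#include <mpi.h>
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Hypothetical local slice of one sample's logits; the true class
    // happens to live on rank 0 at index 1.
    std::vector<float> logits = {0.1f * rank, 1.0f, -0.5f};
    int target_class_rank = 0, target_index = 1;

    // 1) Global max for numerical stability.
    float local_max = logits[0];
    for (float v : logits) local_max = std::max(local_max, v);
    float global_max;
    MPI_Allreduce(&local_max, &global_max, 1, MPI_FLOAT, MPI_MAX, MPI_COMM_WORLD);

    // 2) Global sum of exp(logit - max): the softmax normalizer.
    float local_sum = 0.0f;
    for (float v : logits) local_sum += std::exp(v - global_max);
    float global_sum;
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    // 3) Only the rank holding the target logit computes the loss.
    if (rank == target_class_rank) {
        float loss = std::log(global_sum) - (logits[target_index] - global_max);
        printf("cross-entropy loss: %f\n", loss);
    }
    MPI_Finalize();
}
```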
float16
float16 provides the IEEE 754 half-precision format (binary16), with correct conversions to and from float32
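
Complementing the decode sketch above, here is the other direction: encoding a float as binary16 with round-to-nearest-even. This is a simplified sketch, not this library's code: results that would be half subnormals are flushed to signed zero, and NaN payloads are reduced to a single quiet NaN:

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>

uint16_t float_to_half(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    uint16_t sign = (uint16_t)((bits >> 16) & 0x8000);
    uint32_t expf = (bits >> 23) & 0xFF;             // float exponent field
    uint32_t frac = bits & 0x7FFFFF;

    if (expf == 0xFF)                                 // Inf or NaN
        return (uint16_t)(sign | 0x7C00 | (frac ? 0x200 : 0));

    int32_t exp = (int32_t)expf - 127 + 15;           // rebias 127 -> 15
    if (exp >= 31) return (uint16_t)(sign | 0x7C00);  // overflow -> Inf
    if (exp <= 0)  return sign;                       // simplification: flush to zero

    // Round the 23-bit fraction to 10 bits, ties to even.
    uint32_t rounded = frac >> 13;
    uint32_t rest = frac & 0x1FFF;
    if (rest > 0x1000 || (rest == 0x1000 && (rounded & 1))) ++rounded;
    // A carry out of the fraction bumps the exponent, which is correct
    // (and turns exp == 30 into Inf when the value rounds past 65504).
    return (uint16_t)(sign | ((uint32_t)exp << 10) | rounded);
}

int main() {
    printf("0x%04X\n", float_to_half(1.0f));      // 0x3C00
    printf("0x%04X\n", float_to_half(-2.0f));     // 0xC000
    printf("0x%04X\n", float_to_half(65504.0f));  // 0x7BFF, the largest half
}
```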
cublasHgemm-P100
Code for testing native float16 matrix multiplication performance on Tesla P100 and V100 GPUs using cublasHgemm
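
A minimal timing harness around cublasHgemm (a real cuBLAS entry point) looks roughly like the sketch below; this is not the benchmark's own code. It times a single cold call, so a real measurement should warm up and average over many iterations, and the matrix size is an arbitrary choice. Compile with something like `nvcc -arch=sm_60 -lcublas`:

```cuda
#include <cublas_v2.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int n = 4096;  // square matrices; contents left uninitialized,
                         // since only throughput is being measured
    __half *A, *B, *C;
    cudaMalloc(&A, sizeof(__half) * n * n);
    cudaMalloc(&B, sizeof(__half) * n * n);
    cudaMalloc(&C, sizeof(__half) * n * n);

    cublasHandle_t handle;
    cublasCreate(&handle);
    __half alpha = __float2half(1.0f), beta = __float2half(0.0f);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cublasHgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, A, n, B, n, &beta, C, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double tflops = 2.0 * n * n * (double)n / (ms * 1e-3) / 1e12;
    printf("%d^3 GEMM: %.2f ms, %.2f TFLOPS\n", n, ms, tflops);

    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
}
```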
chop
Round matrix elements to lower precision in MATLAB
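
chop's core operation is rounding each element to t significand bits; the C++ sketch below reproduces just that precision step using the classic scale-and-subtract splitting trick. Unlike chop, it does not emulate the target format's exponent range (so no overflow/underflow to the format's limits) and offers no stochastic or directed rounding modes:

```cpp
#include <cmath>
#include <cstdio>

// Round x to t significand bits, round-to-nearest: adding and then
// subtracting x * 2^(53-t) forces the low bits out of the double
// significand. Assumes x * 2^(53-t) does not overflow.
double round_significand(double x, int t) {
    if (!std::isfinite(x) || x == 0.0) return x;
    double scale = std::ldexp(x, 53 - t);
    return (x + scale) - scale;
}

int main() {
    double x = 0.1;
    // fp16 keeps 11 significand bits (1 implicit + 10 stored).
    printf("%.17g -> %.17g\n", x, round_significand(x, 11));
}
```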
cuda-half2
Convert CUDA programs from the float data type to half or half2, with SIMDization
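
half2 packs two fp16 values into 32 bits so a single instruction operates on both lanes. Below is a minimal hand-written AXPY kernel of the style such a conversion produces, using the real `__hfma2` and `__float2half2_rn` intrinsics; it assumes a GPU with compute capability 5.3 or later (e.g. `nvcc -arch=sm_60`), and input data is left uninitialized for brevity:

```cuda
#include <cuda_fp16.h>
#include <cuda_runtime.h>

// y = a*x + y over packed half2: two fused multiply-adds per thread.
__global__ void haxpy2(int n2, __half2 a, const __half2* x, __half2* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n2)
        y[i] = __hfma2(a, x[i], y[i]);
}

int main() {
    const int n = 1 << 20;   // fp16 element count (even)
    const int n2 = n / 2;    // half2 pair count
    __half2 *x, *y;
    cudaMalloc(&x, n2 * sizeof(__half2));
    cudaMalloc(&y, n2 * sizeof(__half2));
    cudaMemset(y, 0, n2 * sizeof(__half2));
    // A real program would cudaMemcpy input data into x here.

    __half2 a = __float2half2_rn(2.0f);  // broadcast scalar into both lanes
    haxpy2<<<(n2 + 255) / 256, 256>>>(n2, a, x, y);
    cudaDeviceSynchronize();

    cudaFree(x); cudaFree(y);
}
```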