float16 topic
float16
IEEE 754 half-precision floating-point ponyfill for the Stage 3 Float16Array proposal
half-rs
Half-precision floating-point types f16 and bf16 for Rust.
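The two formats trade precision for range: f16 keeps 10 mantissa bits behind a 5-bit exponent, while bf16 keeps float32's 8-bit exponent and drops to 7 mantissa bits. Below is a minimal NumPy sketch of that tradeoff, not the crate's Rust API; bf16 is emulated here by truncating float32 bits, whereas real conversions round to nearest even.

```python
import numpy as np

def truncate_to_bf16(x: np.ndarray) -> np.ndarray:
    """Emulate bf16 by zeroing the low 16 bits of a float32.
    (Real bf16 conversion rounds to nearest even; truncation
    keeps this sketch short.)"""
    return (x.astype(np.float32).view(np.uint32) & 0xFFFF0000).view(np.float32)

x = np.array([3.14159, 1e5], dtype=np.float32)
print(x.astype(np.float16))   # [3.14  inf]  -- f16 overflows above 65504
print(truncate_to_bf16(x))    # [3.140625  99840.]  -- bf16 keeps float32's range
```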
OpenSeq2Seq
Toolkit for efficient experimentation with speech recognition, text-to-speech, and NLP
float16
Go package providing the IEEE 754 half-precision format (binary16) with correct conversions to/from float32
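As background, binary16 packs a value into 1 sign bit, 5 exponent bits (bias 15), and 10 mantissa bits; that is why float16-to-float32 conversion is always exact while the reverse has to round. A NumPy sketch of the layout follows (an illustration of the format, not this package's Go API).

```python
import numpy as np

# binary16 layout: 1 sign bit | 5 exponent bits (bias 15) | 10 mantissa bits
bits = int(np.float16(1.5).view(np.uint16))

sign     = bits >> 15
exponent = (bits >> 10) & 0x1F
mantissa = bits & 0x3FF

print(f"{bits:#06x} -> sign={sign}, exponent={exponent - 15}, mantissa=0x{mantissa:03x}")
# 0x3e00 -> sign=0, exponent=0, mantissa=0x200   (1.5 = (1 + 512/1024) * 2**0)

# float16 -> float32 is exact; float32 -> float16 rounds to 11 significant bits.
assert np.float32(np.float16(1.5)) == np.float32(1.5)
print(float(np.float16(0.1)))  # 0.0999755859375
```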
cublasHgemm-P100
Code for testing native float16 matrix-multiplication performance on Tesla P100 and V100 GPUs, based on cublasHgemm
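A rough way to reproduce such a measurement from Python: PyTorch routes half-precision matmul on NVIDIA GPUs through cuBLAS GEMM kernels, so timing an FP16 matmul approximates a direct cublasHgemm benchmark. A sketch, assuming a CUDA-capable GPU; the size n = 4096 is an arbitrary choice.

```python
import torch

n = 4096
a = torch.randn(n, n, device="cuda", dtype=torch.half)
b = torch.randn(n, n, device="cuda", dtype=torch.half)

torch.matmul(a, b)  # warm-up so kernel selection isn't timed

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
torch.matmul(a, b)
end.record()
torch.cuda.synchronize()

ms = start.elapsed_time(end)
print(f"{2 * n**3 / (ms * 1e-3) / 1e12:.1f} TFLOP/s")  # a GEMM costs ~2*n^3 FLOPs
```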
GradientAccumulator
Gradient accumulation for TensorFlow 2
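The underlying pattern is easy to hand-roll in TF2: run several micro-batches, sum their gradients, and apply one averaged optimizer step, emulating a batch size larger than fits in memory. A minimal sketch of that loop; the toy model, data, and accum_steps = 4 are placeholders, and the package wraps the same idea behind a model/optimizer interface.

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(0.01)
loss_fn = tf.keras.losses.MeanSquaredError()
accum_steps = 4

accumulated = None
for _ in range(accum_steps):
    x = tf.random.normal((8, 3))   # one micro-batch of 8 samples
    y = tf.random.normal((8, 1))
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, model.trainable_variables)
    accumulated = grads if accumulated is None else [
        a + g for a, g in zip(accumulated, grads)
    ]

# One weight update with the micro-batch average of the gradients.
optimizer.apply_gradients(
    [(g / accum_steps, v) for g, v in zip(accumulated, model.trainable_variables)]
)
```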
SimSIMD
Up to 200x faster inner products and vector similarity for Python, JavaScript, Rust, C, and Swift, supporting f64, f32, f16 real & complex, i8, and binary vectors, using SIMD for both x86 AVX2 & AVX-...
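Typical use from Python, sketched under two assumptions: that the `simsimd` bindings (pip install simsimd) expose a `cosine` distance function over NumPy arrays, and that f16 inputs are accumulated internally in higher precision, which is what lets it take float16 vectors directly without the accuracy loss of a naive f16 loop.

```python
import numpy as np
import simsimd  # assumed: the SimSIMD Python bindings

a = np.random.rand(1536).astype(np.float16)
b = np.random.rand(1536).astype(np.float16)

# Assumed API: cosine *distance*, with the SIMD kernel chosen at runtime.
dist = simsimd.cosine(a, b)

# Cross-check against a float32 NumPy reference.
af, bf = a.astype(np.float32), b.astype(np.float32)
ref = 1.0 - af @ bf / (np.linalg.norm(af) * np.linalg.norm(bf))
print(float(dist), float(ref))
```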
float16_t
C++20 implementation of a 16-bit floating-point type mimicking most of IEEE 754 behavior. Single file and header-only.
torch-cublas-hgemm
PyTorch half-precision GEMM library with fused optional bias and optional ReLU/GELU
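For contrast, the un-fused baseline such a kernel replaces: in stock PyTorch the half-precision GEMM, the bias add, and the activation run as three separate CUDA kernels, each reading and writing the full output tensor in GPU memory. A sketch with arbitrary shapes, assuming a CUDA GPU.

```python
import torch
import torch.nn.functional as F

x = torch.randn(4096, 4096, device="cuda", dtype=torch.half)
w = torch.randn(4096, 4096, device="cuda", dtype=torch.half)
bias = torch.randn(4096, device="cuda", dtype=torch.half)

# GEMM, bias add, and GELU as three kernels; a fused HGEMM does this in one pass.
out = F.gelu(x @ w.T + bias)
```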