Adding GPU automatic mixed precision training support
Automatic Mixed Precision (AMP) training on GPU was recently introduced for TensorFlow:
https://medium.com/tensorflow/automatic-mixed-precision-in-tensorflow-for-faster-ai-training-on-nvidia-gpus-6033234b2540
Automatic mixed precision training uses both FP32 and FP16 precision where appropriate. FP16 operations can leverage the Tensor Cores on NVIDIA GPUs (Volta, Turing, or newer architectures) for significantly improved throughput.
This PR adds GPU automatic mixed precision training to the gcn training task. It can be enabled either by setting the environment variable TF_ENABLE_AUTO_MIXED_PRECISION=1 or by passing the flag --gpu_auto_mixed_precision=True:

python train.py --dataset cora --gpu_auto_mixed_precision=True
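
As a rough sketch of how the flag could be wired into train.py (the flag definition and TF1-style flag handling shown here are assumptions, not necessarily how the PR implements it), the optimizer can be wrapped with TensorFlow's mixed precision graph rewrite, which has the same effect as setting the environment variable:

```python
import os
import tensorflow as tf

flags = tf.app.flags
FLAGS = flags.FLAGS
# Hypothetical flag matching the command line above; the PR may define it differently.
flags.DEFINE_bool('gpu_auto_mixed_precision', False,
                  'Enable GPU automatic mixed precision training.')

optimizer = tf.train.AdamOptimizer(learning_rate=0.01)

if FLAGS.gpu_auto_mixed_precision or os.environ.get('TF_ENABLE_AUTO_MIXED_PRECISION') == '1':
    # Available in TF >= 1.14: rewrites the graph to run eligible ops in FP16
    # and applies dynamic loss scaling to avoid gradient underflow.
    optimizer = tf.train.experimental.enable_mixed_precision_graph_rewrite(optimizer)

# The wrapped optimizer is then used as usual to build the training op, e.g.:
# train_op = optimizer.minimize(loss)
```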
How mixed precision works
Mixed precision is the use of both float16 and float32 data types when training a model.
Performing arithmetic operations in float16 takes advantage of specialized processing units such as the Tensor Cores on NVIDIA GPUs. However, due to the narrower representable range of float16, running the entire training in float16 can cause gradients to underflow, leading to convergence or model-quality issues.
However, performing only select arithmetic operations in float16 results in performance gains when using compatible hardware accelerators, decreasing training time and reducing memory usage, typically without sacrificing model performance.
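
As a quick illustration of the underflow issue (NumPy is used here purely for demonstration), a gradient value that is perfectly representable in float32 flushes to zero in float16, while scaling it up first keeps it representable:

```python
import numpy as np

grad = np.float32(1e-8)          # a small but valid float32 gradient
print(np.float16(grad))          # 0.0 -- below float16's smallest subnormal (~6e-8), so it underflows
print(np.float16(grad * 1024))   # ~1.0e-05 -- after scaling (as loss scaling does), the value survives in float16
```

This is the motivation for the loss scaling that automatic mixed precision applies alongside the FP16 graph rewrite.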
To learn more about mixed precision and how it works:
- Overview of Automatic Mixed Precision for Deep Learning
- NVIDIA Mixed Precision Training Documentation
- NVIDIA Deep Learning Performance Guide