tensorflow-wavenet
Adding GPU automatic mixed precision training
Automatic Mixed Precision (AMP) training on GPU was recently introduced in TensorFlow:
https://medium.com/tensorflow/automatic-mixed-precision-in-tensorflow-for-faster-ai-training-on-nvidia-gpus-6033234b2540
Automatic mixed precision training uses FP32 and FP16 precision where each is appropriate. FP16 operations can leverage the Tensor Cores on NVIDIA GPUs (Volta, Turing, or newer architectures) for higher throughput, and because FP16 halves memory use, mixed precision training often also allows larger batch sizes.
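For reference, TensorFlow 1.14+ exposes AMP as a Grappler graph rewrite that can be switched on through the session config. The sketch below shows the stock TensorFlow mechanism, not code from this PR's diff:

```python
import tensorflow as tf
from tensorflow.core.protobuf import rewriter_config_pb2

# Ask Grappler to apply the automatic mixed precision rewrite:
# eligible ops are cast to FP16 and loss scaling is handled automatically.
config = tf.ConfigProto()
config.graph_options.rewrite_options.auto_mixed_precision = (
    rewriter_config_pb2.RewriterConfig.ON)
sess = tf.Session(config=config)
```

In NVIDIA's NGC TensorFlow containers, the same rewrite can also be enabled without code changes by setting the environment variable TF_ENABLE_AUTO_MIXED_PRECISION=1.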
This PR adds GPU automatic mixed precision training to tensorflow-wavenet, enabled by passing the flag --auto_mixed_precision=True:
python train.py --data_dir=/path/to/data/ --auto_mixed_precision=True
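For illustration, here is a rough sketch of how such a flag could gate the rewrite at the optimizer level in a TF1 training script. The argument parsing, helper, and optimizer below are illustrative and not taken from the actual diff:

```python
import argparse
import tensorflow as tf


def str_to_bool(value):
    # Accept the "--auto_mixed_precision=True" style shown above.
    return value.lower() in ('true', '1', 'yes')


parser = argparse.ArgumentParser(description='WaveNet training')
parser.add_argument('--auto_mixed_precision', type=str_to_bool, default=False,
                    help='Enable automatic mixed precision training on GPU.')
args = parser.parse_args()

# Build the usual FP32 optimizer, then optionally wrap it so the AMP graph
# rewrite inserts FP16 casts and dynamic loss scaling around it.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
if args.auto_mixed_precision:
    optimizer = tf.train.experimental.enable_mixed_precision_graph_rewrite(optimizer)
```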
To learn more about mixed precision and how it works:
- Overview of Automatic Mixed Precision for Deep Learning
- NVIDIA Mixed Precision Training Documentation
- NVIDIA Deep Learning Performance Guide