nntrainer
Support Convolution & BatchNorm Fusion for Optimized Inference Mode
One way to accelerate NNTrainer in inference mode is to fuse operations. We currently apply this fusion only when exporting to TensorFlow Lite; applying it inside NNTrainer itself would improve inference speed as well.
Many deep learning models place a BatchNorm layer right after a Conv layer. At inference time, BatchNorm uses fixed running statistics, so the two operations can be folded into a single convolution with rescaled weights and an adjusted bias.
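The folding math can be sketched as follows. This is a minimal NumPy illustration of the standard Conv+BN fusion identity, not NNTrainer code; all function and variable names here are illustrative:

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold inference-mode BatchNorm into the preceding conv.

    BN(conv(x)) = gamma * (W*x + b - mean) / sqrt(var + eps) + beta
                = (scale * W) * x + (scale * (b - mean) + beta),
    where scale = gamma / sqrt(var + eps), applied per output channel.
    """
    scale = gamma / np.sqrt(var + eps)
    W_fused = W * scale[:, None, None, None]  # W shape: (out_ch, in_ch, kh, kw)
    b_fused = scale * (b - mean) + beta
    return W_fused, b_fused

def conv2d(x, W, b):
    """Naive valid cross-correlation; x: (in_ch, H, W), W: (out_ch, in_ch, kh, kw)."""
    out_ch, _, kh, kw = W.shape
    _, H, Wd = x.shape
    out = np.empty((out_ch, H - kh + 1, Wd - kw + 1))
    for o in range(out_ch):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(x[:, i:i+kh, j:j+kw] * W[o]) + b[o]
    return out

def batchnorm(x, gamma, beta, mean, var, eps=1e-5):
    """Inference-mode BN using fixed running statistics, per channel."""
    return gamma[:, None, None] * (x - mean[:, None, None]) \
        / np.sqrt(var[:, None, None] + eps) + beta[:, None, None]

rng = np.random.default_rng(42)
x = rng.standard_normal((3, 8, 8))
W = rng.standard_normal((4, 3, 3, 3))
b = rng.standard_normal(4)
gamma = rng.standard_normal(4)
beta = rng.standard_normal(4)
mean = rng.standard_normal(4)
var = rng.random(4) + 0.1  # running variance must be positive

y_ref = batchnorm(conv2d(x, W, b), gamma, beta, mean, var)
W_f, b_f = fuse_conv_bn(W, b, gamma, beta, mean, var)
y_fused = conv2d(x, W_f, b_f)  # one conv, no BN op at inference
print(np.allclose(y_ref, y_fused, atol=1e-6))
```

Because the fold is an exact algebraic identity, the fused conv matches Conv→BN up to floating-point error while removing one elementwise pass over the activations.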