Keras-inference-time-optimizer
How is this repo related to fused BN in TensorFlow?
How is this repo related to fused BN in TensorFlow? Is it doing roughly the same thing? https://www.tensorflow.org/api_docs/python/tf/nn/fused_batch_norm https://www.tensorflow.org/performance/performance_guide#common_fused_ops
Also, maybe we could export the graph from Keras to TensorFlow and then freeze it, or go even further with TensorRT?
Yes, it's probably the same, but the description doesn't give enough detail.
https://github.com/srihari-humbarwadi/TensorRT-for-keras/blob/master/keras_freeze_model.py https://github.com/srihari-humbarwadi/TensorRT-for-keras/blob/master/optimize_graph.py @mrgloom this is similar to what you were suggesting.
It looks like something similar is also implemented in coremltools: https://github.com/apple/coremltools/blob/da988c683bc466370181e4178b089aa6f07b138f/coremltools/converters/keras/_layers2.py#L502-L522
Yet another place where a similar technique is used: https://github.com/tensorflow/tensorflow/tree/9590c4c32dd4346ea5c35673336f5912c6072bf2/tensorflow/tools/graph_transforms#optimizing-for-deployment There it's called batch norm folding.
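For reference, all the links above implement variants of the same algebraic fold: since batch norm at inference time is just an affine transform, its parameters can be absorbed into the preceding layer's weights and bias. A minimal sketch of the math in plain NumPy (using a dense layer instead of a conv for brevity; the function name and eps default are illustrative, not from any of the linked projects):

```python
import numpy as np

def fold_batch_norm(W, b, gamma, beta, mean, var, eps=1e-3):
    """Fold inference-time BN parameters into the preceding layer.

    BN(y) = gamma * (y - mean) / sqrt(var + eps) + beta, where
    y = x @ W + b. Substituting y gives an equivalent single affine
    layer with weights W' and bias b'.
    """
    scale = gamma / np.sqrt(var + eps)
    W_folded = W * scale                  # scale each output channel
    b_folded = (b - mean) * scale + beta  # shift the bias accordingly
    return W_folded, b_folded

# Sanity check: the folded layer matches layer followed by BN.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 5))
b = rng.normal(size=5)
gamma, beta = rng.normal(size=5), rng.normal(size=5)
mean, var = rng.normal(size=5), rng.uniform(0.5, 2.0, size=5)

y_bn = gamma * ((x @ W + b) - mean) / np.sqrt(var + 1e-3) + beta
W_f, b_f = fold_batch_norm(W, b, gamma, beta, mean, var)
y_folded = x @ W_f + b_f
assert np.allclose(y_bn, y_folded)
```

The same substitution works per output channel for convolutions, which is why fused BN / BN folding removes the BatchNormalization layers entirely instead of merely speeding them up.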