MCB

Results: 2 issues matching MCB

Hi, I'm trying to convert an ONNX model produced by keras2onnx to a TensorRT INT8 model. My environment is as follows: python=3.7, keras2onnx=1.7, tensorflow=2.2.0, onnx=1.7, onnxconverter_common=1.7. My simple test ONNX code is...

I'm trying to quantize a TF-TRT INT8 model in Colab-TF-TRT-inference-from-Keras-saved-model.ipynb using a Jupyter notebook. I ran into a GPU out-of-memory error, but I think I have enough GPU memory...