TensorRT saved model too large to use with TFServing
Versions: TensorFlow 2.3.0-rc1, CUDA 10, TensorRT 6. I am trying to convert a GPT-2 model; the resulting saved model is about 1.9 GB. This causes an issue when I try to deploy it with TF Serving, because loading it hits protobuf's 1 GB message limit. I also tried skipping TRT engine building before deployment, but that did not reduce the size of the saved_model.pb.
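For reference, this is roughly the TF 2.x conversion flow being described, a minimal sketch using the public `TrtGraphConverterV2` API. The model paths are placeholders, and the commented-out `converter.build()` call is the engine pre-building step the report says was skipped; running this requires a GPU build of TensorFlow linked against TensorRT.

```python
# Sketch of the TF-TRT conversion described above (paths are hypothetical).
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="gpt2_saved_model",  # placeholder: original SavedModel
)

# Rewrites the graph to embed TRTEngineOp nodes; the original weights are
# retained for fallback, which is one reason the output can grow so large.
converter.convert()

# Pre-building engines is optional; the report notes that omitting this step
# did not shrink saved_model.pb.
# converter.build(input_fn=my_input_fn)

converter.save("gpt2_trt_saved_model")  # placeholder: converted SavedModel
```

The size blow-up is consistent with the converter serializing both the TRT engine data and the fallback weights into the same `saved_model.pb`, which is what runs into protobuf's message-size ceiling at serving time.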
CC @bixia1
Any updates on this? @sanjoy @bixia1
I hit the same problem on TF 2.4.1.
I am also hitting this problem.