Michele Toni
Thank you, my use case is like this: the client sends an image -> Model A (TensorRT) on the server -> Model B (TensorRT) on the server -> custom C++ code on the server ->...
Sorry for the question, but I saw that TensorRT Inference Server allows chaining more than one model (the feature is actually still in development), and that it is possible to add...
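In case it helps anyone with a similar setup, here is a minimal sketch of how I would chain the steps in my own server-side C++ handler while the ensemble feature is still in development. All of the function names below (`runModelA`, `runModelB`, `customStepC`) are placeholders, not part of TensorRT or the inference server API; the point is just that the client makes a single request and the intermediate tensors never leave the server.

```cpp
#include <iostream>
#include <vector>

// Placeholder stand-ins for the two TensorRT engines and the custom C++ step.
// In the real pipeline these would wrap the TensorRT execution contexts (or,
// once ensembles land in the inference server, be expressed in the model config).
std::vector<float> runModelA(const std::vector<float>& image) {
    return image;        // stub: pretend Model A produced some features
}
std::vector<float> runModelB(const std::vector<float>& features) {
    return features;     // stub: pretend Model B produced some logits
}
std::vector<float> customStepC(const std::vector<float>& logits) {
    return logits;       // stub: custom C++ post-processing
}

// Server-side handler: chains A -> B -> C so the client sends the image once
// and gets the final result back, with no intermediate round trips.
std::vector<float> handleRequest(const std::vector<float>& image) {
    auto a_out = runModelA(image);
    auto b_out = runModelB(a_out);
    return customStepC(b_out);
}

int main() {
    std::vector<float> fakeImage(224 * 224 * 3, 0.5f);  // dummy input
    auto result = handleRequest(fakeImage);
    std::cout << "output size: " << result.size() << "\n";
}
```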
For arm32, try this: https://github.com/dmlc/mxnet/issues/1914
I can't be sure it will work for you, but I suggest updating the core mxnet. If you use the original **mxnet_predict-all.cc**, you have to create a new...
This is my latest tested version, but it is a year old, so I suggest running the amalgamation from scratch. Follow the iOS instructions here: https://github.com/dmlc/mxnet/tree/master/amalgamation . Updating...
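For reference, this is roughly how I call the amalgamated build through the C predict API (**c_predict_api.h**). The model file names, the "data" input name, and the 1x3x224x224 shape are just placeholders for my own model, so adjust them to yours:

```cpp
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>
#include "c_predict_api.h"   // predict API header shipped with the amalgamation

// Read a whole file into a string (used for both the symbol JSON and the params blob).
static std::string readFile(const char* path) {
    std::ifstream in(path, std::ios::binary);
    std::ostringstream ss;
    ss << in.rdbuf();
    return ss.str();
}

int main() {
    // Placeholder model files and input shape -- substitute your own.
    std::string symbol_json = readFile("model-symbol.json");
    std::string params      = readFile("model-0000.params");

    const char* input_keys[1]           = { "data" };
    const mx_uint input_shape_indptr[2] = { 0, 4 };
    const mx_uint input_shape_data[4]   = { 1, 3, 224, 224 };  // NCHW

    PredictorHandle pred = nullptr;
    // dev_type 1 = CPU, dev_id 0
    MXPredCreate(symbol_json.c_str(), params.data(), static_cast<int>(params.size()),
                 1, 0, 1, input_keys, input_shape_indptr, input_shape_data, &pred);

    std::vector<mx_float> image(1 * 3 * 224 * 224, 0.0f);      // dummy input image
    MXPredSetInput(pred, "data", image.data(), static_cast<mx_uint>(image.size()));
    MXPredForward(pred);

    // Query the output shape, then copy the output values out.
    mx_uint* shape = nullptr;
    mx_uint ndim = 0;
    MXPredGetOutputShape(pred, 0, &shape, &ndim);
    mx_uint size = 1;
    for (mx_uint i = 0; i < ndim; ++i) size *= shape[i];

    std::vector<mx_float> out(size);
    MXPredGetOutput(pred, 0, out.data(), size);
    std::printf("got %u output values\n", size);

    MXPredFree(pred);
    return 0;
}
```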