Wenbing Li
The discrepancy between TRT and ONNXRuntime appears to be unrelated to the converter itself. It looks like the ConvTranspose operator upgrade from opset 1 to opset 11 only clarifies some behavior...
To do inference, you need an inference engine such as onnxruntime (https://github.com/microsoft/onnxruntime)
@CsharpIslife, can you try re-installing your package? It seems the Keras converter was invoked, not the XGBoost one; it looks like there is a Python environment issue here. /opt/anaconda3/lib/python3.6/site-packages/onnxmltools/convert/main.py in convert_xgboost(*args, **kwargs)...
If there are many tf.ops in the error message, probably some layers could not be converted. Can you share more details about this model?
The op conversion implementation is missing from the source code. May I borrow some pieces of your code for the unit testing?
Sorry to let you know only now that the latest code supports RandomStandardNormal. However, these random ops actually generate different results across different inference runtimes, even when the seed is...
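The point about random ops diverging across runtimes can be illustrated without any inference engine at all: two RNG implementations seeded identically still produce different streams, because the seeding scheme and the float-extraction path differ. The snippet below (an illustration, not the actual onnxruntime/TF behavior) contrasts Python's stdlib `random` with NumPy's legacy `RandomState`, both built on Mersenne Twister.

```python
import random
import numpy as np

SEED = 42

# Two different RNG implementations, same seed.
random.seed(SEED)       # stdlib random: MT19937, Python's seeding scheme
np.random.seed(SEED)    # NumPy legacy RandomState: MT19937, NumPy's scheme

py_val = random.random()            # first draw from the stdlib stream
np_val = float(np.random.random())  # first draw from the NumPy stream

# Despite the shared seed, the streams diverge immediately.
print(py_val, np_val)
```

The same effect applies to ONNX `Random*` ops: the spec fixes the distribution and (optionally) the seed attribute, but not the generator implementation, so bitwise-identical outputs across runtimes cannot be expected.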
@buddhapuneeth, is it possible to support the axis != 1 case with the existing ONNX operators?
> > Which version of keras2onnx do you use? Can you pull the latest master and try?
>
> I used the latest master of keras2onnx, and the problem came out...
> Can you share some more details on what the memory leaks in the current C++ API are, since the current API is already widely used?

A lot of memory...
> #ifdef ORT_API_MANUAL_INIT

Can we refine this piece a little so the end user does not always have to work out the right way of initialization?

---
Refers to: include/onnxruntime/core/session/onnxruntime_cxx_api.h:77...