Zero Zeng
> However, the output of the command is random numbers.

You are using a zero tensor as input, so it's expected that the output is meaningless.
Your model's output is a tensor produced by softmax, so you need further post-processing.
Also, please make sure you apply the correct pre-processing to the inputs.
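As a sketch of the kind of post-processing meant here (the 4-class probability vector is a made-up example; in practice you would copy these values back from the output buffer):

```python
def postprocess(probs):
    """Pick the most probable class from a softmax output vector.

    `probs` is a flat list of per-class probabilities, as copied
    back from the network's output buffer.
    """
    best = max(range(len(probs)), key=lambda i: probs[i])
    return best, probs[best]

# Hypothetical 4-class softmax output:
probs = [0.05, 0.10, 0.70, 0.15]
cls, score = postprocess(probs)
print(cls, score)  # -> 2 0.7
```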
Hi, the original Python API had a bug. Could you try again? Steps: ``` cd tiny-tensorrt git pull cd build rm -rf * cmake .. -DBUILD_PYTHON=ON make // after the build python3 >> import sys >> sys.path.append("./lib") >> import pytrt >>...
Feel free to file a PR, I can help review it :-) It would be great if we could share your work with other people. -> If you can provide it, libopencv is needed to read the images and do some of the pre-processing work. I would prefer...
When I increase the batch size, the inference time on TensorRT does not change. -> With ONNX, TensorRT uses explicit batch, which means that if you want to use a dynamic batch size,...
Setting the batch size won't work for ONNX models; it only applies to Caffe and UFF.
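For an explicit-batch ONNX model, a dynamic batch size is handled with an optimization profile at build time rather than by setting a max batch size. A minimal sketch using the TensorRT Python API (the input tensor name "input", the shape (3, 224, 224), and the file name "model.onnx" are assumptions; requires a TensorRT installation):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch network: the batch dimension is part of the network itself
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    parser.parse(f.read())

config = builder.create_builder_config()
profile = builder.create_optimization_profile()
# min / opt / max shapes for the dynamic batch dimension
# (the ONNX model must have been exported with batch = -1)
profile.set_shape("input",
                  (1, 3, 224, 224),   # min
                  (8, 3, 224, 224),   # opt
                  (32, 3, 224, 224))  # max
config.add_optimization_profile(profile)
engine = builder.build_serialized_network(network, config)
```

At inference time you then set the actual input shape on the execution context before enqueueing; batches outside the [min, max] range of the profile are rejected.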
This is a plugin template. If you want to know more details about plugins, you can refer to https://github.com/NVIDIA/TensorRT/tree/main/plugin; those are good examples.
I think I've run into a similar problem. I run this project on an Ubuntu 16.04 server with no X11, inside a conda environment, and get the following error: ``` $...
Hi @guods, it's nice to see you again :) For your first question: each ICudaEngine object is bound to a specific GPU when it is instantiated, either by the builder...
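Because the engine is bound to whichever GPU is current when it is created, the device has to be selected before building or deserializing. A sketch of one way to do this (device index 1 and the file name "model.engine" are assumptions; requires pycuda and TensorRT):

```python
import pycuda.driver as cuda
import tensorrt as trt

cuda.init()
# Make GPU 1 the current device; the engine created below is bound to it
ctx = cuda.Device(1).make_context()
try:
    logger = trt.Logger(trt.Logger.WARNING)
    runtime = trt.Runtime(logger)
    with open("model.engine", "rb") as f:
        engine = runtime.deserialize_cuda_engine(f.read())
    # ... all inference with this engine must also run on GPU 1 ...
finally:
    ctx.pop()
```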