mmdetection-to-tensorrt
Can the exported file be called from C++?
How can I use TensorRT's C++ interface to load and run the exported model? Is that feasible, and if so, how?
Yes, please read "use in C++" for details.
Thanks for your answer, but I can't find libamirstan_plugin.so in this project or in the amirstan_plugin project. Can you tell me where to get it?
It is in ${AMIRSTAN_PLUGIN_PATH}/build/lib if you have built the plugin following the README.
Thanks a lot, it works now. If I want to load an engine saved from Python, is there an API I can call directly, or do I have to read the file in C++?
Use the DeepStream support if you want to do stream inference, or use the C++ API as in the official samples. Preprocessing and postprocessing are not in the engine and should be implemented by yourself.
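To make the C++ path above more concrete, here is a minimal sketch of loading a Python-serialized engine with the TensorRT C++ API (assuming TensorRT 8.x). The file name detector.engine is illustrative; loading the plugin .so via dlopen before deserialization is one common way to register custom plugin creators.

```cpp
#include <NvInfer.h>
#include <dlfcn.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <memory>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
};

int main() {
    // Load the custom plugin library first so its plugin creators are
    // registered before the engine is deserialized.
    void* plugin = dlopen("libamirstan_plugin.so", RTLD_NOW | RTLD_GLOBAL);
    if (!plugin) {
        std::cerr << "failed to load plugin: " << dlerror() << std::endl;
        return 1;
    }

    // Read the serialized engine produced on the Python side.
    // "detector.engine" is an illustrative file name.
    std::ifstream file("detector.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    Logger logger;
    auto runtime = std::unique_ptr<nvinfer1::IRuntime>(
        nvinfer1::createInferRuntime(logger));
    auto engine = std::unique_ptr<nvinfer1::ICudaEngine>(
        runtime->deserializeCudaEngine(blob.data(), blob.size()));
    if (!engine) {
        std::cerr << "failed to deserialize engine" << std::endl;
        return 1;
    }

    // From here, create an execution context, copy the preprocessed image
    // to the input binding, and run inference as in the official samples.
    return 0;
}
```

This is a sketch, not a drop-in implementation: binding lookup, CUDA memory transfers, and the exact deserialization overload depend on your TensorRT version.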
In SSD, does postprocessing refer to operations like decode and NMS? How should we understand postprocessing for model deployment? Is it everything besides ops like conv, pool, and fc?
@ShaneYS The only postprocessing you need to do is:
bboxes = bboxes / scale_factor
Thanks, but I still have a question. When exporting to ONNX or TensorRT, which kinds of operations get exported and which do not? Why do so many PyTorch-to-TensorRT conversion projects say that NMS cannot be exported and must be implemented by hand?
@ShaneYS Here are the TensorRT supported layers. Ops like NMS that are not supported natively can be implemented as TensorRT plugins. That's why this project needs amirstan_plugin.
Got it, thanks!
What should I do if I can't find libamirstan_plugin.so in ${AMIRSTAN_PLUGIN_PATH}/build/lib after running cmake -DTENSORRT_DIR=${TENSORRT_DIR} .. and make -j10? Please help.
After running cmake and make, I can't find the libamirstan_plugin.so file in build/lib. What should I do? Could you share that file?