FastDeploy
OCR inference with the TensorRT backend reports an error
python infer.py --det_model ./ch_PP-OCRv3_det_infer --cls_model ./ch_ppocr_mobile_v2.0_cls_infer --rec_model ./ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 22.jpg --device gpu --backend trt
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(466)::BuildTrtEngine Start to building TensorRT Engine...
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(238)::log 4: [graphShapeAnalyzer.cpp::analyzeShapes::1294] Error Code 4: Miscellaneous (IElementWiseLayer p2o.Add.62: broadcast dimensions must be conformable)
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(238)::log 2: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(529)::BuildTrtEngine Failed to call buildSerializedNetwork().
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(636)::CreateTrtEngineFromOnnx Failed to build tensorrt engine.
[INFO] fastdeploy/runtime.cc(487)::Init Runtime initialized with Backend::TRT in Device::GPU.
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(466)::BuildTrtEngine Start to building TensorRT Engine...
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(552)::BuildTrtEngine TensorRT Engine is built successfully.
[INFO] fastdeploy/runtime.cc(487)::Init Runtime initialized with Backend::TRT in Device::GPU.
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(466)::BuildTrtEngine Start to building TensorRT Engine...
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(552)::BuildTrtEngine TensorRT Engine is built successfully.
[INFO] fastdeploy/runtime.cc(487)::Init Runtime initialized with Backend::TRT in Device::GPU.
[WARNING] fastdeploy/backends/tensorrt/utils.cc(40)::Update [New Shape Out of Range] input name: x, shape: [1, 3, 544, 608], The shape range before: min_shape=[1, 3, 48, 10], max_shape=[1, 3, 48, 2304].
[WARNING] fastdeploy/backends/tensorrt/utils.cc(52)::Update [New Shape Out of Range] The updated shape range now: min_shape=[1, 3, 48, 10], max_shape=[1, 3, 544, 2304].
[WARNING] fastdeploy/backends/tensorrt/trt_backend.cc(278)::Infer TensorRT engine will be rebuilt once shape range information changed, this may take lots of time, you can set a proper shape range before loading model to avoid rebuilding process. refer https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/en/faq/tensorrt_tricks.md for more details.
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(466)::BuildTrtEngine Start to building TensorRT Engine...
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(238)::log 4: [shapeCompiler.cpp::evaluateShapeChecks::911] Error Code 4: Internal Error (kOPT values for profile 0 violate shape constraints: condition '==' violated. 4 != 3. p2o.Add.62: dimensions not compatible for elementwise)
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(238)::log 2: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(529)::BuildTrtEngine Failed to call buildSerializedNetwork().
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(369)::SetInputs TRTBackend SetInputs not find name:x
Aborted (core dumped)
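The rebuild warning in the log above suggests setting a proper shape range before loading the model. A minimal sketch of that, using FastDeploy's Python `RuntimeOption.set_trt_input_shape` as described in the linked tensorrt_tricks FAQ (the concrete min/opt/max values here are illustrative assumptions, not values from this thread):

```python
import fastdeploy as fd

# Sketch: declare the dynamic-shape range for input "x" up front, so the
# TensorRT engine is built once for the full range instead of being rebuilt
# whenever an out-of-range input arrives at inference time.
rec_option = fd.RuntimeOption()
rec_option.use_gpu(0)
rec_option.use_trt_backend()
# Rec model input layout is [batch, 3, 48, width]; widen the width range
# (and batch, if needed) to cover what your images actually produce.
rec_option.set_trt_input_shape("x",
                               [1, 3, 48, 10],     # min shape
                               [1, 3, 48, 320],    # opt shape (assumed)
                               [1, 3, 48, 2304])   # max shape
```

With the range declared in advance, the "[New Shape Out of Range]" warning and the slow mid-inference rebuild should not occur.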
What GPU model are you using?
Hi,
1. Other than swapping in a different image, nothing else was changed, right? (If so, could you share that image so we can reproduce the issue?)
2. Which version of TensorRT are you using?
3. Have you run the infer.py script multiple times? Does the error occur every time?
1. Yes, only the image was changed; nothing else was modified.
2. The GPU is a 3090, with CUDA 11.6 and TensorRT 8.4.1.5.
3. Tried multiple times: recognition works fine on both CPU and GPU; only the TensorRT backend reports this error.
I installed directly via pip instead of building from source; could that be the problem?
Hi, please try disabling TRT for the det model only, while keeping TRT enabled for the cls and rec models. I will look into this issue later.
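The suggested workaround can be sketched with FastDeploy's Python `RuntimeOption`: keep the det model on the default Paddle Inference backend on GPU and enable TensorRT only for cls and rec. Method names follow the FastDeploy Python examples; treat the exact calls as assumptions for your installed version.

```python
import fastdeploy as fd

# det: GPU, but no TensorRT (falls back to the default Paddle Inference backend)
det_option = fd.RuntimeOption()
det_option.use_gpu(0)

# cls: GPU with TensorRT
cls_option = fd.RuntimeOption()
cls_option.use_gpu(0)
cls_option.use_trt_backend()

# rec: GPU with TensorRT
rec_option = fd.RuntimeOption()
rec_option.use_gpu(0)
rec_option.use_trt_backend()
```

Each option object is then passed as the `runtime_option` when constructing the corresponding det/cls/rec model, so the failing det conversion is bypassed while the other two still benefit from TRT.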
Loading the already-converted TensorRT engine directly fails with an error that the output layer cannot be found
python infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 22.jpg --device gpu --backend trt
[INFO] fastdeploy/runtime.cc(513)::Init Runtime initialized with Backend::PDINFER in Device::GPU.
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(623)::CreateTrtEngineFromOnnx Detect serialized TensorRT Engine file in ch_PP-OCRv3_rec_infer/rec_trt_cache.trt, will load it directly.
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(108)::LoadTrtCache Build TensorRT Engine from cache file: ch_PP-OCRv3_rec_infer/rec_trt_cache.trt with shape range information as below,
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(111)::LoadTrtCache Input name: x, shape=[-1, 3, -1, -1], min=[1, 3, 48, 10], max=[1, 3, 48, 2304]
[INFO] fastdeploy/runtime.cc(502)::Init Runtime initialized with Backend::TRT in Device::GPU.
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(623)::CreateTrtEngineFromOnnx Detect serialized TensorRT Engine file in ch_PP-OCRv3_rec_infer/rec_trt_cache.trt, will load it directly.
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(108)::LoadTrtCache Build TensorRT Engine from cache file: ch_PP-OCRv3_rec_infer/rec_trt_cache.trt with shape range information as below,
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(111)::LoadTrtCache Input name: x, shape=[-1, 3, 48, -1], min=[1, 3, 48, 10], max=[1, 3, 48, 2304]
[INFO] fastdeploy/runtime.cc(502)::Init Runtime initialized with Backend::TRT in Device::GPU.
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(442)::AllocateOutputsBuffer Cannot find output: softmax_0.tmp_0 of tensorrt network from the original model.
softmax_0.tmp_0
1. That op is right at the end of the cls model... very strange. Were all of your models downloaded from the links we provide?
2. Have you tried the approach from my previous reply, disabling TRT for det and checking whether the other two models work?
3. Your last log looks odd: the rec model's TRT cache was read twice, yet the op reported as missing belongs to the cls model.
4. Could you leave a contact method so we can help resolve your issue?
- All models are the ones you provide.
- Tried it, and it works. I then enabled saving the TensorRT engine with rec_option.set_trt_cache_file(args.rec_model + "/rec_trt_cache.trt"); on the next run I get the error above.
- My WeChat: 18845426042
I ran into the same problem. Environment: 3090, CUDA 11.2, fastdeploy-gpu-python 0.7.0
python3 infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device gpu --backend trt
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(479)::BuildTrtEngine Start to building TensorRT Engine...
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(238)::log 4: [graphShapeAnalyzer.cpp::analyzeShapes::1294] Error Code 4: Miscellaneous (IElementWiseLayer p2o.Add.84: broadcast dimensions must be conformable)
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(238)::log 2: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(542)::BuildTrtEngine Failed to call buildSerializedNetwork().
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(655)::CreateTrtEngineFromOnnx Failed to build tensorrt engine.
[INFO] fastdeploy/runtime.cc(502)::Init Runtime initialized with Backend::TRT in Device::GPU.
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(479)::BuildTrtEngine Start to building TensorRT Engine...
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(565)::BuildTrtEngine TensorRT Engine is built successfully.
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(567)::BuildTrtEngine Serialize TensorRTEngine to local file ch_PP-OCRv3_rec_infer/rec_trt_cache.trt.
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(577)::BuildTrtEngine TensorRTEngine is serialized to local file ch_PP-OCRv3_rec_infer/rec_trt_cache.trt, we can load this model from the seralized engine directly next time.
[INFO] fastdeploy/runtime.cc(502)::Init Runtime initialized with Backend::TRT in Device::GPU.
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(623)::CreateTrtEngineFromOnnx Detect serialized TensorRT Engine file in ch_PP-OCRv3_rec_infer/rec_trt_cache.trt, will load it directly.
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(108)::LoadTrtCache Build TensorRT Engine from cache file: ch_PP-OCRv3_rec_infer/rec_trt_cache.trt with shape range information as below,
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(111)::LoadTrtCache Input name: x, shape=[-1, 3, 48, -1], min=[1, 3, 48, 10], max=[1, 3, 48, 2304]
[INFO] fastdeploy/runtime.cc(502)::Init Runtime initialized with Backend::TRT in Device::GPU.
[WARNING] fastdeploy/backends/tensorrt/utils.cc(40)::Update [New Shape Out of Range] input name: x, shape: [1, 3, 960, 608], The shape range before: min_shape=[1, 3, 48, 10], max_shape=[1, 3, 48, 2304].
[WARNING] fastdeploy/backends/tensorrt/utils.cc(52)::Update [New Shape Out of Range] The updated shape range now: min_shape=[1, 3, 48, 10], max_shape=[1, 3, 960, 2304].
[WARNING] fastdeploy/backends/tensorrt/trt_backend.cc(291)::Infer TensorRT engine will be rebuilt once shape range information changed, this may take lots of time, you can set a proper shape range before loading model to avoid rebuilding process. refer https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/en/faq/tensorrt_tricks.md for more details.
[INFO] fastdeploy/backends/tensorrt/trt_backend.cc(479)::BuildTrtEngine Start to building TensorRT Engine...
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(238)::log 4: [shapeCompiler.cpp::evaluateShapeChecks::911] Error Code 4: Internal Error (kOPT values for profile 0 violate shape constraints: condition '==' violated. 4 != 3. p2o.Add.84: dimensions not compatible for elementwise)
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(238)::log 2: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(542)::BuildTrtEngine Failed to call buildSerializedNetwork().
[ERROR] fastdeploy/backends/tensorrt/trt_backend.cc(382)::SetInputs TRTBackend SetInputs not find name:x
Aborted (core dumped)
This has been fixed. Please refer to: https://github.com/PaddlePaddle/FastDeploy/blob/develop/examples/vision/ocr/PP-OCRv3/python/infer.py
Tried it, and it works. Thank you!
GPU: Jetson NVIDIA Tegra Xavier NX
Ubuntu 20.04
python 3.8
jetpack 5.1
Compiled the .whl file from source.
Using the same infer.py and the same model files as in the FastDeploy GitHub repo; I only changed the picture.
The error occurs when generating the rec model's TensorRT engine file:
python infer.py --det_model ch_PP-OCRv3_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv3_rec_infer --rec_label_file ppocr_keys_v1.txt --image 1.jpg --device gpu --backend trt
WARNING:root:RuntimeOption.set_trt_input_shape
will be deprecated in v1.2.0, please use RuntimeOption.trt_option.set_shape()
instead.
WARNING:root:RuntimeOption.set_trt_input_shape
will be deprecated in v1.2.0, please use RuntimeOption.trt_option.set_shape()
instead.
WARNING:root:RuntimeOption.set_trt_input_shape
will be deprecated in v1.2.0, please use RuntimeOption.trt_option.set_shape()
instead.
WARNING:root:RuntimeOption.set_trt_cache_file
will be deprecated in v1.2.0, please use RuntimeOption.trt_option.serialize_file = ch_PP-OCRv3_det_infer/det_trt_cache.trt
instead.
WARNING:root:RuntimeOption.set_trt_cache_file
will be deprecated in v1.2.0, please use RuntimeOption.trt_option.serialize_file = ch_ppocr_mobile_v2.0_cls_infer/cls_trt_cache.trt
instead.
WARNING:root:RuntimeOption.set_trt_cache_file
will be deprecated in v1.2.0, please use RuntimeOption.trt_option.serialize_file = ch_PP-OCRv3_rec_infer/rec_trt_cache.trt
instead.
[INFO] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(702)::CreateTrtEngineFromOnnx Detect serialized TensorRT Engine file in ch_PP-OCRv3_det_infer/det_trt_cache.trt, will load it directly.
[INFO] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(108)::LoadTrtCache Build TensorRT Engine from cache file: ch_PP-OCRv3_det_infer/det_trt_cache.trt with shape range information as below,
[INFO] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(111)::LoadTrtCache Input name: x, shape=[-1, 3, -1, -1], min=[1, 3, 64, 64], max=[1, 3, 960, 960]
[INFO] fastdeploy/runtime/runtime.cc(306)::CreateTrtBackend Runtime initialized with Backend::TRT in Device::GPU.
[INFO] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(702)::CreateTrtEngineFromOnnx Detect serialized TensorRT Engine file in ch_ppocr_mobile_v2.0_cls_infer/cls_trt_cache.trt, will load it directly.
[INFO] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(108)::LoadTrtCache Build TensorRT Engine from cache file: ch_ppocr_mobile_v2.0_cls_infer/cls_trt_cache.trt with shape range information as below,
[INFO] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(111)::LoadTrtCache Input name: x, shape=[-1, 3, -1, -1], min=[1, 3, 48, 10], max=[1, 3, 48, 1024]
[INFO] fastdeploy/runtime/runtime.cc(306)::CreateTrtBackend Runtime initialized with Backend::TRT in Device::GPU.
[INFO] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(556)::BuildTrtEngine Start to building TensorRT Engine...
[ERROR] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(239)::log 4: [optimizer.cpp::computeCosts::3725] Error Code 4: Internal Error (Could not find any implementation for node p2o.Softmax.2 due to insufficient workspace. See verbose log for requested sizes.)
[ERROR] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(239)::log 2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[ERROR] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(619)::BuildTrtEngine Failed to call buildSerializedNetwork().
[ERROR] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(735)::CreateTrtEngineFromOnnx Failed to build tensorrt engine.
[INFO] fastdeploy/runtime/runtime.cc(306)::CreateTrtBackend Runtime initialized with Backend::TRT in Device::GPU.
[ERROR] fastdeploy/runtime/backends/tensorrt/trt_backend.cc(446)::SetInputs TRTBackend SetInputs not find name:x
Aborted (core dumped)
Thanks a lot!
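The deprecation warnings at the top of this log spell out the migration path to the `RuntimeOption.trt_option` interface planned for v1.2.0. A sketch of the new-style calls, with attribute names taken directly from the warning text (the shape values are illustrative assumptions):

```python
import fastdeploy as fd

rec_option = fd.RuntimeOption()
rec_option.use_gpu(0)
rec_option.use_trt_backend()

# Old: rec_option.set_trt_input_shape("x", min, opt, max)
rec_option.trt_option.set_shape("x",
                                [1, 3, 48, 10],     # min shape
                                [1, 3, 48, 320],    # opt shape (assumed)
                                [1, 3, 48, 2304])   # max shape

# Old: rec_option.set_trt_cache_file("ch_PP-OCRv3_rec_infer/rec_trt_cache.trt")
rec_option.trt_option.serialize_file = "ch_PP-OCRv3_rec_infer/rec_trt_cache.trt"
```

Setting `serialize_file` also addresses the long engine-build times raised later in the thread: the built engine is cached to disk and reloaded on subsequent runs instead of being rebuilt.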
I tried again. It worked, the problem solved. Thanks a lot.
@jiangjiajun Building the TensorRT engine takes an extremely long time. How can this be optimized?