PaddleOCR
SVTR recognition model produces no output with TensorRT inference
The Paddle Inference library I installed is this wheel: https://paddle-inference-lib.bj.bcebos.com/2.3.2/python/Linux/GPU/x86-64_gcc8.2_avx_mkl_cuda10.2_cudnn8.1.1_trt7.2.3.4/paddlepaddle_gpu-2.3.2-cp37-cp37m-linux_x86_64.whl

My environment matches the versions in that wheel name. With TensorRT inference the detection model works fine, but the recognition model produces almost no output; without TensorRT the recognition model behaves normally.

For the recognition model I added a number of shape-related settings in tools/infer/utility.py; the recognition branch now looks like this:
```python
elif mode == "rec":
    print(args.rec_algorithm)
    if args.rec_algorithm not in ["CRNN", "SVTR_LCNet"]:
        use_dynamic_shape = False
    imgH = int(args.rec_image_shape.split(',')[-2])
    min_trans_shape = {
        "bilinear_interp_v2_0.tmp_0": [1, 3, 32, 64],
        "transpose_1.tmp_0_slice_2": [1, 6, 480, 32],
        "transpose_4.tmp_0_slice_2": [1, 6, 480, 32],
        "transpose_7.tmp_0_slice_2": [1, 6, 480, 32],
        "transpose_12.tmp_0_slice_2": [1, 8, 240, 32],
        "transpose_15.tmp_0_slice_2": [1, 8, 240, 32],
        "transpose_18.tmp_0_slice_2": [1, 8, 240, 32],
        "transpose_21.tmp_0_slice_2": [1, 8, 240, 32],
        "transpose_24.tmp_0_slice_2": [1, 8, 240, 32],
        "transpose_27.tmp_0_slice_2": [1, 8, 240, 32],
        "transpose_30.tmp_0_slice_2": [1, 8, 240, 32],
        "transpose_33.tmp_0_slice_2": [1, 8, 240, 32],
        "layer_norm_51.tmp_2": [1, 240, 256],
        "tmp_3": [1, 480, 192],
        "tmp_5": [1, 6, 480, 480],
        "tmp_7": [1, 480, 192],
        "tmp_9": [1, 6, 480, 480],
        "tmp_11": [1, 480, 192],
        "tmp_13": [1, 6, 480, 480],
        "tmp_15": [1, 480, 192],
        "tmp_17": [1, 8, 240, 240],
        "tmp_19": [1, 240, 256],
        "tmp_21": [1, 8, 240, 240],
        "tmp_23": [1, 240, 256],
        "tmp_25": [1, 8, 240, 240],
        "tmp_27": [1, 240, 256],
        "tmp_29": [1, 8, 240, 240],
        "tmp_31": [1, 240, 256],
        "tmp_33": [1, 8, 240, 240],
        "tmp_35": [1, 240, 256],
        "tmp_37": [1, 8, 240, 240],
        "tmp_39": [1, 240, 256],
        "tmp_41": [1, 8, 240, 240],
        "tmp_43": [1, 240, 256],
    }
    max_trans_shape = {
        "bilinear_interp_v2_0.tmp_0": [args.rec_batch_num, 3, 32, 64],
        "transpose_1.tmp_0_slice_2": [args.rec_batch_num, 6, 480, 32],
        "transpose_4.tmp_0_slice_2": [args.rec_batch_num, 6, 480, 32],
        "transpose_7.tmp_0_slice_2": [args.rec_batch_num, 6, 480, 32],
        "transpose_12.tmp_0_slice_2": [args.rec_batch_num, 8, 240, 32],
        "transpose_15.tmp_0_slice_2": [args.rec_batch_num, 8, 240, 32],
        "transpose_18.tmp_0_slice_2": [args.rec_batch_num, 8, 240, 32],
        "transpose_21.tmp_0_slice_2": [args.rec_batch_num, 8, 240, 32],
        "transpose_24.tmp_0_slice_2": [args.rec_batch_num, 8, 240, 32],
        "transpose_27.tmp_0_slice_2": [args.rec_batch_num, 8, 240, 32],
        "transpose_30.tmp_0_slice_2": [args.rec_batch_num, 8, 240, 32],
        "transpose_33.tmp_0_slice_2": [args.rec_batch_num, 8, 240, 32],
        "layer_norm_51.tmp_2": [args.rec_batch_num, 240, 256],
        "tmp_3": [args.rec_batch_num, 480, 192],
        "tmp_5": [args.rec_batch_num, 6, 480, 480],
        "tmp_7": [args.rec_batch_num, 480, 192],
        "tmp_9": [args.rec_batch_num, 6, 480, 480],
        "tmp_11": [args.rec_batch_num, 480, 192],
        "tmp_13": [args.rec_batch_num, 6, 480, 480],
        "tmp_15": [args.rec_batch_num, 480, 192],
        "tmp_17": [args.rec_batch_num, 8, 240, 240],
        "tmp_19": [args.rec_batch_num, 240, 256],
        "tmp_21": [args.rec_batch_num, 8, 240, 240],
        "tmp_23": [args.rec_batch_num, 240, 256],
        "tmp_25": [args.rec_batch_num, 8, 240, 240],
        "tmp_27": [args.rec_batch_num, 240, 256],
        "tmp_29": [args.rec_batch_num, 8, 240, 240],
        "tmp_31": [args.rec_batch_num, 240, 256],
        "tmp_33": [args.rec_batch_num, 8, 240, 240],
        "tmp_35": [args.rec_batch_num, 240, 256],
        "tmp_37": [args.rec_batch_num, 8, 240, 240],
        "tmp_39": [args.rec_batch_num, 240, 256],
        "tmp_41": [args.rec_batch_num, 8, 240, 240],
        "tmp_43": [args.rec_batch_num, 240, 256],
    }
    # opt_trans_shape is identical to max_trans_shape, entry for entry.
    opt_trans_shape = dict(max_trans_shape)
    min_input_shape = {"x": [1, 3, imgH, 10]}
    max_input_shape = {"x": [args.rec_batch_num, 3, imgH, 4000]}
    opt_input_shape = {"x": [args.rec_batch_num, 3, imgH, 320]}
    min_input_shape.update(min_trans_shape)
    max_input_shape.update(max_trans_shape)
    opt_input_shape.update(opt_trans_shape)
```
Here is the command I run:

```
python tools/infer/predict_rec.py --image_dir="./OCR_dataset_test/crop_img/" --rec_model_dir="./inference/rec_svtr_large_stn_en/" --rec_char_dict_path="./ppocr/utils/tgb_dict.txt" --use_tensorrt=True
```
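For reference, the min/max/opt dictionaries above only take effect if they are handed to the Paddle Inference `Config` before the predictor is created. Below is a minimal sketch of that wiring; `args` and the three dictionaries come from the snippet above, while the export file names and the `enable_tensorrt_engine` arguments are assumptions, not a copy of the shipped `utility.py`:

```python
from paddle.inference import Config, PrecisionType, create_predictor

# Assumed export file names inside the rec_model_dir used in the command above.
model_file = "./inference/rec_svtr_large_stn_en/inference.pdmodel"
params_file = "./inference/rec_svtr_large_stn_en/inference.pdiparams"

config = Config(model_file, params_file)
config.enable_use_gpu(500, 0)  # 500 MB initial GPU memory pool, GPU 0
config.enable_tensorrt_engine(
    workspace_size=1 << 30,
    max_batch_size=args.rec_batch_num,
    min_subgraph_size=15,
    precision_mode=PrecisionType.Float32,
    use_static=False,
    use_calib_mode=False)
# Register the dynamic-shape ranges built above; without this call the
# hand-built min/max/opt dictionaries never reach TensorRT.
config.set_trt_dynamic_shape_info(min_input_shape, max_input_shape, opt_input_shape)
predictor = create_predictor(config)
```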
Below is the log output after running the command (a few lines looked to me like the likely culprits, but I don't know how to fix them):
```
grep: warning: GREP_OPTIONS is deprecated; please use an alias or script
SVTR_LCNet
W1103 12:30:03.433691 47754 analysis_predictor.cc:1118] The one-time configuration of analysis predictor failed, which may be due to native predictor called first and its configurations taken effect.
I1103 12:30:03.491290 47754 analysis_predictor.cc:881] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [adaptive_pool2d_convert_global_pass]
--- Running IR pass [shuffle_channel_detect_pass]
--- Running IR pass [quant_conv2d_dequant_fuse_pass]
--- Running IR pass [delete_quant_dequant_op_pass]
--- Running IR pass [delete_quant_dequant_filter_op_pass]
--- Running IR pass [delete_weight_dequant_linear_op_pass]
--- Running IR pass [delete_quant_dequant_linear_op_pass]
--- Running IR pass [add_support_int8_pass]
I1103 12:30:04.873852 47754 fuse_pass_base.cc:57] --- detected 628 subgraphs
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [preln_embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v2]
--- Running IR pass [multihead_matmul_fuse_pass_v3]
--- Running IR pass [skip_layernorm_fuse_pass]
I1103 12:30:05.180601 47754 fuse_pass_base.cc:57] --- detected 1 subgraphs
--- Running IR pass [preln_skip_layernorm_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [unsqueeze2_eltwise_fuse_pass]
--- Running IR pass [trt_squeeze2_matmul_fuse_pass]
--- Running IR pass [trt_reshape2_matmul_fuse_pass]
--- Running IR pass [trt_flatten2_matmul_fuse_pass]
--- Running IR pass [trt_map_matmul_v2_to_mul_pass]
I1103 12:30:05.228595 47754 fuse_pass_base.cc:57] --- detected 87 subgraphs
--- Running IR pass [trt_map_matmul_v2_to_matmul_pass]
W1103 12:30:05.234623 47754 trt_map_matmul_to_mul_pass.cc:449] matmul op not support broadcast, please check inputs'shape.
W1103 12:30:05.234683 47754 trt_map_matmul_to_mul_pass.cc:449] matmul op not support broadcast, please check inputs'shape.
I1103 12:30:05.238886 47754 fuse_pass_base.cc:57] --- detected 42 subgraphs
--- Running IR pass [trt_map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
I1103 12:30:05.342517 47754 fuse_pass_base.cc:57] --- detected 87 subgraphs
--- Running IR pass [conv_elementwise_add_fuse_pass]
I1103 12:30:05.350124 47754 fuse_pass_base.cc:57] --- detected 10 subgraphs
--- Running IR pass [tensorrt_subgraph_pass]
I1103 12:30:05.426345 47754 tensorrt_subgraph_pass.cc:145] --- detect a sub-graph with 21 nodes
W1103 12:30:05.427937 47754 tensorrt_subgraph_pass.cc:374] The Paddle Inference library is compiled with 7.2 version TensorRT, but the runtime TensorRT you are using is 7 version. This might cause serious compatibility issues. We strongly recommend using the same TRT version at runtime.
I1103 12:30:05.429605 47754 tensorrt_subgraph_pass.cc:433] Prepare TRT engine (Optimize model structure, Select OP kernel etc). This process may cost a lot of time.
W1103 12:30:05.722949 47754 helper.h:107] Tensor DataType is determined at build time for tensors not marked as input or output.
  (... the same "Tensor DataType is determined at build time" warning repeats many times ...)
I1103 12:30:05.730120 47754 engine.cc:222] Run Paddle-TRT Dynamic Shape mode.
I1103 12:30:08.385004 47754 engine.cc:471] Inspector needs TensorRT version 8.2 and after.
  (... the "detect a sub-graph" / "Prepare TRT engine" / "Tensor DataType ..." / "Run Paddle-TRT Dynamic Shape mode" /
   "Inspector needs TensorRT version 8.2 and after" block repeats for each of the remaining TensorRT sub-graphs,
   with 21, 21, 21, 21, 258, 21, 23, 27, 21 and 21 nodes respectively ...)
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I1103 12:30:38.059237 47754 ir_params_sync_among_devices_pass.cc:100] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I1103 12:30:38.084367 47754 memory_optimize_pass.cc:216] Cluster name : fill_constant_5.tmp_0 size: 4
I1103 12:30:38.084396 47754 memory_optimize_pass.cc:216] Cluster name : batch_norm_6.tmp_1 size: 2048
I1103 12:30:38.084419 47754 memory_optimize_pass.cc:216] Cluster name : tmp_4 size: 368640
I1103 12:30:38.084426 47754 memory_optimize_pass.cc:216] Cluster name : matmul_v2_2.tmp_0 size: 5529600
I1103 12:30:38.084434 47754 memory_optimize_pass.cc:216] Cluster name : x size: 196608
I1103 12:30:38.084441 47754 memory_optimize_pass.cc:216] Cluster name : linear_89.tmp_1 size: 1105920
--- Running analysis [ir_graph_to_program_pass]
I1103 12:30:38.316370 47754 analysis_predictor.cc:1035] ======= optimize end =======
I1103 12:30:38.332847 47754 naive_executor.cc:102] --- skip [feed], feed -> x
I1103 12:30:38.335194 47754 naive_executor.cc:102] --- skip [softmax_21.tmp_0], fetch -> fetch
[2022/11/03 12:30:38] ppocr INFO: In PP-OCRv3, rec_image_shape parameter defaults to '3, 48, 320', if you are using recognition model with PP-OCRv2 or an older version, please set --rec_image_shape='3,32,320
W1103 12:30:38.655773 47754 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.5, Driver API Version: 11.6, Runtime API Version: 10.2
W1103 12:30:38.660480 47754 gpu_resources.cc:91] device: 0, cuDNN Version: 8.1.
[2022/11/03 12:30:44] ppocr INFO: Predicts of ./OCR_dataset_test/crop_img/000_001_crop_0.jpg:('', 0.0)
...
```
Running the same command again the next morning, it now fails with this error:

```
SystemError: (Fatal) Build TensorRT cuda engine failed! Please recheck you configurations related to paddle-TensorRT.
  [Hint: infer_engine_ should not be null.] (at /paddle/paddle/fluid/inference/tensorrt/engine.cc:296)
```
I'd also like to ask: can't I just install the Paddle Inference whl file directly, or does it have to be compiled manually?
You can download a pre-built inference package: https://www.paddlepaddle.org.cn/inference/v2.3/user_guides/download_lib.html. For recognition, does the PP-OCRv3 recognition model run correctly for you?
What I downloaded is exactly from that page, and I also reinstalled TensorRT and cuDNN to match the versions listed there.

Now TensorRT prediction fails with the following error (I don't really understand it: I changed nothing, and now I can't run TensorRT prediction at all):

```
SystemError: (Fatal) Build TensorRT cuda engine failed! Please recheck you configurations related to paddle-TensorRT.
  [Hint: infer_engine_ should not be null.] (at /paddle/paddle/fluid/inference/tensorrt/engine.cc:296)
```
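A quick sanity check I can run to confirm the installed wheel really is the CUDA 10.2 / cuDNN 8.1 build named above (just a diagnostic sketch, not a fix; the expected values come from the wheel file name, and the earlier log also warned that the library was compiled against TensorRT 7.2 while a different 7.x is loaded at runtime):

```python
# Compare what Paddle was compiled against with what the log reports at runtime.
import paddle

print(paddle.__version__)      # expect 2.3.2 for the wheel linked above
print(paddle.version.cuda())   # CUDA version the wheel was built with (expect 10.2)
print(paddle.version.cudnn())  # cuDNN version the wheel was built with (expect 8.1)
paddle.utils.run_check()       # verifies the GPU install actually works
```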
It is probably a problem with your environment.
@andyjpaddle The environment is fine now and TensorRT prediction no longer reports an error, but the model output is garbled, so I suspect TensorRT does not support this yet.
TensorRT is supported; see the documentation https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/doc/doc_ch/inference_ppocr.md#5-tensorrt%E6%8E%A8%E7%90%86 . Just use the latest code, no code changes are needed.
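For example, the documented way to enable TensorRT for the recognition model looks roughly like this (the model directory and test image path here are illustrative):

```
python3 tools/infer/predict_rec.py \
    --image_dir="./doc/imgs_words/ch/word_4.jpg" \
    --rec_model_dir="./ch_PP-OCRv3_rec_infer/" \
    --use_tensorrt=True
```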
But the SVTR model's output is still wrong; prediction without TensorRT is fine, and the PP-OCRv3 recognition model also works fine.
What command are you running?
Something like this:

```
python tools/infer/predict_rec.py --image_dir="./OCR_dataset_test/crop_img/" --rec_model_dir="./inference/rec_svtr_large_stn_en/" --use_angle_cls=false --rec_char_dict_path="./ppocr/utils/tgb_dict.txt" --rec_image_shape="3,64,256" --rec_algorithm='SVTR' --use_tensorrt=True
```
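One thing worth noting: with `--rec_algorithm='SVTR'`, the guard at the top of the patched `rec` branch posted earlier in this thread disables dynamic shapes altogether, so the hand-built min/max/opt tables would not apply to this run:

```python
# Guard from the patched tools/infer/utility.py shown earlier: anything other
# than CRNN / SVTR_LCNet falls back to static shapes, so --rec_algorithm='SVTR'
# bypasses the registered dynamic-shape ranges.
if args.rec_algorithm not in ["CRNN", "SVTR_LCNet"]:
    use_dynamic_shape = False
```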
I actually ran into a similar problem here as well: https://github.com/PaddlePaddle/PaddleOCR/issues/8232
It seems some module is not supported and the config file needs to be modified; I haven't tried that yet.
You could try modifying the config first.