
IASSD C++ inference problem on NVIDIA Orin

Open · 751565516 opened this issue 1 year ago · 1 comment

Compiling on the NVIDIA Orin with `sudo sh compile.sh` finishes without any problem, but running C++ inference:

```
./build/main --model_file /home/xx/Paddle3D/export/iassd_trt2/iassd.pdmodel --params_file /home/xx/Paddle3D/export/iassd_trt2/iassd.pdiparams --lidar_file /home/xx/Downloads/000000.bin
```

fails with the following output:

```
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_bn_fuse_pass]
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0412 16:24:15.115052 35433 fuse_pass_base.cc:59] --- detected 24 subgraphs
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v2]
--- Running IR pass [fused_multi_transformer_encoder_pass]
--- Running IR pass [fused_multi_transformer_decoder_pass]
--- Running IR pass [fused_multi_transformer_encoder_fuse_qkv_pass]
--- Running IR pass [fused_multi_transformer_decoder_fuse_qkv_pass]
--- Running IR pass [multi_devices_fused_multi_transformer_encoder_fuse_qkv_pass]
--- Running IR pass [multi_devices_fused_multi_transformer_decoder_fuse_qkv_pass]
--- Running IR pass [fuse_multi_transformer_layer_pass]
--- Running IR pass [gpu_cpu_squeeze2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_reshape2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_flatten2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_mul_pass]
I0412 16:24:15.639504 35433 fuse_pass_base.cc:59] --- detected 6 subgraphs
--- Running IR pass [gpu_cpu_map_matmul_v2_to_matmul_pass]
--- Running IR pass [matmul_scale_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v3]
--- Running IR pass [gpu_cpu_map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
I0412 16:24:15.670102 35433 fuse_pass_base.cc:59] --- detected 2 subgraphs
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
I0412 16:24:15.701077 35433 fuse_pass_base.cc:59] --- detected 3 subgraphs
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [constant_folding_pass]
--- Running IR pass [auto_mixed_precision_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0412 16:24:15.768220 35433 ir_params_sync_among_devices_pass.cc:89] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I0412 16:24:15.812108 35433 memory_optimize_pass.cc:219] Cluster name : split_2.tmp_1  size: 1024
I0412 16:24:15.812144 35433 memory_optimize_pass.cc:219] Cluster name : split_2.tmp_2  size: 1024
I0412 16:24:15.812152 35433 memory_optimize_pass.cc:219] Cluster name : reshape2_0.tmp_0  size: 196608
I0412 16:24:15.812158 35433 memory_optimize_pass.cc:219] Cluster name : relu_5.tmp_0  size: 33554432
I0412 16:24:15.812163 35433 memory_optimize_pass.cc:219] Cluster name : transpose_0.tmp_0  size: 65536
I0412 16:24:15.812168 35433 memory_optimize_pass.cc:219] Cluster name : relu_4.tmp_0  size: 16777216
I0412 16:24:15.812175 35433 memory_optimize_pass.cc:219] Cluster name : relu_6.tmp_0  size: 1048576
I0412 16:24:15.812178 35433 memory_optimize_pass.cc:219] Cluster name : data  size: 16
I0412 16:24:15.812183 35433 memory_optimize_pass.cc:219] Cluster name : split_0.tmp_4  size: 1024
I0412 16:24:15.812188 35433 memory_optimize_pass.cc:219] Cluster name : squeeze_3.tmp_0  size: 524288
I0412 16:24:15.812193 35433 memory_optimize_pass.cc:219] Cluster name : split_0.tmp_3  size: 1024
I0412 16:24:15.812197 35433 memory_optimize_pass.cc:219] Cluster name : concat_0.tmp_0_slice_0  size: 65536
I0412 16:24:15.812202 35433 memory_optimize_pass.cc:219] Cluster name : transpose_8.tmp_0  size: 12288
I0412 16:24:15.812207 35433 memory_optimize_pass.cc:219] Cluster name : transpose_2.tmp_0  size: 49152
I0412 16:24:15.812212 35433 memory_optimize_pass.cc:219] Cluster name : split_0.tmp_1  size: 1024
I0412 16:24:15.812218 35433 memory_optimize_pass.cc:219] Cluster name : split_0.tmp_5  size: 1024
I0412 16:24:15.812223 35433 memory_optimize_pass.cc:219] Cluster name : relu_27.tmp_0  size: 8388608
I0412 16:24:15.812228 35433 memory_optimize_pass.cc:219] Cluster name : tmp_36  size: 1024
--- Running analysis [ir_graph_to_program_pass]
I0412 16:24:15.987030 35433 analysis_predictor.cc:1318] ======= optimize end =======
I0412 16:24:15.994912 35433 naive_executor.cc:110] --- skip [feed], feed -> data
terminate called after throwing an instance of 'phi::enforce::EnforceNotMet'
  what():
```


```
C++ Traceback (most recent call last):
0   paddle_infer::CreatePredictor(paddle::AnalysisConfig const&)
1   paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
2   std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
3   paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
4   paddle::AnalysisPredictor::PrepareExecutor()
5   paddle::framework::NaiveExecutor::Prepare(paddle::framework::Scope*, paddle::framework::ProgramDesc const&, int, bool)
6   paddle::framework::NaiveExecutor::CreateOps(paddle::framework::ProgramDesc const&, int, bool)
7   paddle::framework::OpRegistry::CreateOp(paddle::framework::OpDesc const&)
8   paddle::framework::OpRegistry::CreateOp(std::string const&, std::map<std::string, std::vector<std::string, std::allocator<std::string > >, std::less<std::string >, std::allocator<std::pair<std::string const, std::vector<std::string, std::allocator<std::string > > > > > const&, std::map<std::string, std::vector<std::string, std::allocator<std::string > >, std::less<std::string >, std::allocator<std::pair<std::string const, std::vector<std::string, std::allocator<std::string > > > > > const&, paddle::framework::AttributeMap const&, paddle::framework::AttributeMap const&, bool)
9   paddle::framework::OpInfoMap::Get(std::string const&) const
10  phi::enforce::EnforceNotMet::EnforceNotMet(phi::ErrorSummary const&, char const*, int)
11  phi::enforce::GetCurrentTraceBackString[abi:cxx11]()
```


```
Error Message Summary:
NotFoundError: Operator (farthest_point_sample) is not registered.
  [Hint: op_info_ptr should not be null.] (at /home/paddle/data/xly/workspace/24117/Paddle/paddle/fluid/framework/op_info.h:154)
```

Aborted (core dumped)
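For context, `farthest_point_sample` is not a built-in Paddle operator; it is one of Paddle3D's custom ops, so a `NotFoundError` at predictor creation usually means the ops' static registration symbols never made it into the `main` binary. A minimal, hypothetical CMake sketch of the idea, assuming the custom ops were built into a static library named `pd_custom_ops` (the target, variable, and library names here are placeholders, not taken from Paddle3D's actual CMakeLists):

```cmake
# Hypothetical fragment. Nothing in main.cc references the custom ops
# directly, so a normal link may let the linker discard their static
# registrar objects; --whole-archive forces every object in the custom
# op library into the binary so registration runs at startup.
add_library(pd_custom_ops STATIC ${CUSTOM_OP_SRCS})  # farthest_point_sample etc.

add_executable(main main.cc)
target_link_libraries(main
  -Wl,--whole-archive pd_custom_ops -Wl,--no-whole-archive
  paddle_inference)
```

An alternative with the same effect is listing the custom op `.cc`/`.cu` sources directly in the executable's source list instead of putting them in a separate static library.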

My environment is as follows:

CUDA 11.4.14

TensorRT 8.4.1

cuDNN 8.4.1

OpenCV 4.5.4

GCC 9.4.0

How can I solve this problem? Thanks!

751565516 · Apr 12 '23 08:04