
FastDeploy with_capi: error when calling GpuInfer

Open · zhengxiaoqing opened this issue 2 years ago · 1 comment

On openEuler 20.03, I compiled the C API example into a shared library (.so); calling it reports an error.

  • FastDeploy version: fastdeploy-linux-gpu-1.0.6
  • Build command (self-compiled, with the C API): cmake .. -DENABLE_ORT_BACKEND=ON -DENABLE_PADDLE_BACKEND=ON -DENABLE_OPENVINO_BACKEND=ON -DCMAKE_INSTALL_PREFIX=${PWD}/compiled_fastdeploy_sdk -DENABLE_VISION=ON -DENABLE_TEXT=ON -DWITH_CAPI=ON -DWITH_GPU=ON -DCUDA_DIRECTORY=/usr/local/cuda
  • Platform: Linux x64 (openEuler 20.03 (LTS-SP3))
  • Hardware: NVIDIA Tesla T4 (TU104GL, rev a1), driver 515.43.04 (NVIDIA-SMI 515.43.04), CUDA 11.7, cuDNN cudnn-linux-x86_64-8.9.0.131, g++ (GCC) 7.3.0

Problem log and the steps that trigger the issue

  • A detailed problem log is attached to help locate and analyze the issue quickly.
  • [Model fails to run] 1. In /root/FastDeploy/examples/vision/ocr/PP-OCR/cpu-gpu/c, I first ran the deployment example under examples, including with the models provided there; everything runs normally.

2. I then compiled the infer_demo under examples into a .so shared library and wrote a separate main to test it; this also runs normally.

CMakeLists.txt:

```cmake
PROJECT(infer_demo C)
CMAKE_MINIMUM_REQUIRED(VERSION 3.10)

# Path of the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add FastDeploy header dependencies
include_directories(${FASTDEPLOY_INCS})
include_directories(${PROJECT_SOURCE_DIR})

add_library(infer_demo SHARED ${PROJECT_SOURCE_DIR}/infer.c)
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
```

Main.c:

```c
#include "/root/FastDeploy/examples/vision/ocr/PP-OCR/cpu-gpu/c/infer.h"

int main() {
    const char *det_model_dir1 = "/root/packet-box/config/models/ocr/det";
    const char *cls_model_dir1 = "/root/packet-box/config/models/ocr/cls";
    const char *rec_model_dir1 = "/root/packet-box/config/models/ocr/rec";
    const char *rec_label_file1 = "/root/packet-box/config/models/ocr/ppocr_keys_v1.txt";
    const char *image1 = "/root/FastDeploy/examples/vision/ocr/PP-OCR/cpu-gpu/c/build/12.jpg";
    GpuInfer(det_model_dir1, cls_model_dir1, rec_model_dir1, rec_label_file1, image1);
    return 0;
}
```

The result is normal:

```
[INFO] fastdeploy/runtime/runtime.cc(266)::CreatePaddleBackend Runtime initialized with Backend::PDINFER in Device::GPU.
[INFO] fastdeploy/runtime/runtime.cc(266)::CreatePaddleBackend Runtime initialized with Backend::PDINFER in Device::GPU.
[INFO] fastdeploy/runtime/runtime.cc(266)::CreatePaddleBackend Runtime initialized with Backend::PDINFER in Device::GPU.
det boxes: [[42,413],[483,391],[484,428],[43,450]]rec text: 上海斯格威铂尔大酒店 rec score:0.980085 cls label: 0 cls score: 1.000000
det boxes: [[187,456],[399,448],[400,480],[188,488]]rec text: 打浦路15号 rec score:0.964993 cls label: 0 cls score: 1.000000
det boxes: [[23,507],[513,488],[515,529],[24,548]]rec text: 绿洲仕格维花园公寓args)0 rec score:0.993726 cls label: 0 cls score: 1.000000
det boxes: [[74,553],[427,542],[428,571],[75,582]]rec text: 打浦路252935号 rec score:0.947723 cls label: 0 cls score: 1.000000
Visualized result saved in ./vis_result.jpg
```

3. Calling the above .so shared library from my own project reports an error. Partial code:

```c
int result = is_image_file(img_dir);
if (result) {
    const char *det_model_dir1 = "/root/packet-box/config/models/ocr/det";
    const char *cls_model_dir1 = "/root/packet-box/config/models/ocr/cls";
    const char *rec_model_dir1 = "/root/packet-box/config/models/ocr/rec";
    const char *rec_label_file1 = "/root/packet-box/config/models/ocr/ppocr_keys_v1.txt";
    const char *image1 = "/root/FastDeploy/examples/vision/ocr/PP-OCR/cpu-gpu/c/build/12.jpg";
    GpuInfer(det_model_dir1, cls_model_dir1, rec_model_dir1, rec_label_file1, image1);
}
```

The relevant error log:

```
terminate called after throwing an instance of 'phi::enforce::EnforceNotMet'
  what():

C++ Traceback (most recent call last):

0  cm_dpdk_actor_exec
1  GpuInfer
2  paddle_infer::CreatePredictor(paddle::AnalysisConfig const&)
3  paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
4  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
5  paddle::AnalysisConfig::fraction_of_gpu_memory_for_pool() const
6  phi::backends::gpu::SetDeviceId(int)
7  phi::backends::gpu::GetGPUDeviceCount()
8  phi::enforce::EnforceNotMet::EnforceNotMet(phi::ErrorSummary const&, char const*, int)
9  phi::enforce::GetCurrentTraceBackString[abi:cxx11]

Error Message Summary:

ExternalError: CUDA error(2), out of memory. [Hint: Please search for the error code(2) on website (https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html#group__CUDART__TYPES_1g3f51e3575c2178246db0a94a430e0038) to get Nvidia's official solution and advice about CUDA Error.] (at /workspace/qiuyanjun/fastdeploy/Paddle/paddle/phi/backends/gpu/cuda/cuda_info.cc:65)
```

In addition, CpuInfer also reports an error:

```
C++ Traceback (most recent call last):

1  CpuInfer
2  paddle_infer::CreatePredictor(paddle::AnalysisConfig const&)
3  paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
4  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
5  paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
6  paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
7  paddle::AnalysisPredictor::OptimizeInferenceProgram()
8  paddle::inference::analysis::IrGraphBuildPass::RunImpl(paddle::inference::analysis::Argument*)
9  paddle::inference::analysis::IrGraphBuildPass::LoadModel(std::string const&, std::string const&, paddle::framework::Scope*, phi::Place const&, bool, bool)
10 paddle::inference::Load(paddle::framework::Executor*, paddle::framework::Scope*, std::string const&, std::string const&, bool)
11 paddle::inference::LoadPersistables(paddle::framework::Executor*, paddle::framework::Scope*, paddle::framework::ProgramDesc const&, std::string const&, std::string const&, bool)
12 paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector<std::string, std::allocator<std::string> > const&, bool, bool)
13 paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool)
14 paddle::framework::Executor::RunPartialPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, long, long, bool, bool, bool)
15 paddle::framework::CPUGarbageCollector::CPUGarbageCollector(phi::CPUPlace const&, unsigned long)
16 paddle::framework::GarbageCollector::GarbageCollector(phi::Place const&, unsigned long)
17 phi::DeviceContextPool::Get(phi::Place const&)
18 std::__future_base::_Deferred_state<std::thread::_Invoker<std::tuple<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > (*)(phi::Place const&, bool, int), phi::Place, bool, int> >, std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > >::_M_complete_async()
19 std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > >, std::__future_base::_Result_base::_Deleter>, std::thread::_Invoker<std::tuple<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > (*)(phi::Place const&, bool, int), phi::Place, bool, int> >, std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > > >::_M_invoke(std::_Any_data const&)
20 std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > paddle::platform::CreateDeviceContext<phi::OneDNNContext>(phi::Place const&, bool, int)
21 paddle::memory::allocation::AllocatorFacade::Instance()
22 paddle::memory::allocation::AllocatorFacade::AllocatorFacade()
23 paddle::memory::allocation::AllocatorFacadePrivate::AllocatorFacadePrivate(bool)
24 phi::backends::gpu::GetGPUDeviceCount()
25 phi::enforce::EnforceNotMet::EnforceNotMet(phi::ErrorSummary const&, char const*, int)
26 phi::enforce::GetCurrentTraceBackString[abi:cxx11]

Error Message Summary:

ExternalError: CUDA error(2), out of memory. [Hint: Please search for the error code(2) on website (https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html#group__CUDART__TYPES_1g3f51e3575c2178246db0a94a430e0038) to get Nvidia's official solution and advice about CUDA Error.] (at /workspace/qiuyanjun/fastdeploy/Paddle/paddle/phi/backends/gpu/cuda/cuda_info.cc:65)
```

4. When FastDeploy is built without GPU support, CpuInfer works fine.

zhengxiaoqing · Oct 07 '23

This error is out of memory: the GPU does not have enough free memory.

rainyfly · Feb 06 '24