
FastDeploy 1.0.0 Docker container GPU crashing

Open · akansal1 opened this issue 2 years ago · 1 comment

Environment

FastDeploy version: 1.0.0
OS Platform: Linux x64
Hardware: Nvidia T4
Program Language: Python 3.8

Problem description

I am trying to run the PP-OCRv3 example via Docker serving, using the following image:

docker pull paddlepaddle/fastdeploy:1.0.0-gpu-cuda11.4-trt8.4-21.10
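For reference, this is roughly how I start the serving container and server (a sketch of my setup: the mount path /ocr_serving, the container name, and the assumption that fastdeployserver accepts the Triton-style --model-repository flag are specific to my environment, not from the FastDeploy docs):

# start the GPU serving container, mounting the PP-OCRv3 serving example directory
docker run -dit --gpus all --net=host --name fastdeploy_ocr --shm-size="1g" \
    -v $PWD:/ocr_serving \
    paddlepaddle/fastdeploy:1.0.0-gpu-cuda11.4-trt8.4-21.10 bash

# launch the server inside the container, pointing it at the model repository
docker exec -it fastdeploy_ocr \
    fastdeployserver --model-repository=/ocr_serving/models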

But on inference the Triton server crashes with the following traceback:

Signal (11) received.
 0# 0x000055A10674A8A9 in fastdeployserver
 1# 0x00007F46B8676210 in /usr/lib/x86_64-linux-gnu/libc.so.6
 2# fastdeploy::AdaptivePool2dKernel::CpuAdaptivePool(std::vector<long, std::allocator<long> > const&, std::vector<long, std::allocator<long> > const&, float const*, float*) in /opt/fastdeploy/lib/libfastdeploy_runtime.so.1.0.0
 3# fastdeploy::AdaptivePool2dKernel::Compute(OrtKernelContext*) in /opt/fastdeploy/lib/libfastdeploy_runtime.so.1.0.0
 4# Ort::CustomOpBase<fastdeploy::AdaptivePool2dOp, fastdeploy::AdaptivePool2dKernel>::CustomOpBase()::{lambda(void*, OrtKernelContext*)#8}::operator()(void*, OrtKernelContext*) const in /opt/fastdeploy/lib/libfastdeploy_runtime.so.1.0.0
 5# Ort::CustomOpBase<fastdeploy::AdaptivePool2dOp, fastdeploy::AdaptivePool2dKernel>::CustomOpBase()::{lambda(void*, OrtKernelContext*)#8}::_FUN(void*, OrtKernelContext*) in /opt/fastdeploy/lib/libfastdeploy_runtime.so.1.0.0
 6# 0x00007F464319CE67 in /opt/fastdeploy/third_libs/install/onnxruntime/lib/libonnxruntime.so.1.12.0
 7# 0x00007F4643995154 in /opt/fastdeploy/third_libs/install/onnxruntime/lib/libonnxruntime.so.1.12.0
 8# 0x00007F4643977ADA in /opt/fastdeploy/third_libs/install/onnxruntime/lib/libonnxruntime.so.1.12.0
 9# 0x00007F464397AFC1 in /opt/fastdeploy/third_libs/install/onnxruntime/lib/libonnxruntime.so.1.12.0
10# 0x00007F46431CED46 in /opt/fastdeploy/third_libs/install/onnxruntime/lib/libonnxruntime.so.1.12.0
11# 0x00007F46431CF0A8 in /opt/fastdeploy/third_libs/install/onnxruntime/lib/libonnxruntime.so.1.12.0
12# 0x00007F464315E260 in /opt/fastdeploy/third_libs/install/onnxruntime/lib/libonnxruntime.so.1.12.0
13# Ort::Session::Run(Ort::RunOptions const&, Ort::IoBinding const&) in /opt/fastdeploy/lib/libfastdeploy_runtime.so.1.0.0
14# fastdeploy::OrtBackend::Infer(std::vector<fastdeploy::FDTensor, std::allocator<fastdeploy::FDTensor> >&, std::vector<fastdeploy::FDTensor, std::allocator<fastdeploy::FDTensor> >*, bool) in /opt/fastdeploy/lib/libfastdeploy_runtime.so.1.0.0
15# fastdeploy::Runtime::Infer() in /opt/fastdeploy/lib/libfastdeploy_runtime.so.1.0.0
16# 0x00007F467E1A2134 in /opt/tritonserver/backends/fastdeploy/libtriton_fastdeploy.so
17# 0x00007F467E1A5B96 in /opt/tritonserver/backends/fastdeploy/libtriton_fastdeploy.so
18# TRITONBACKEND_ModelInstanceExecute in /opt/tritonserver/backends/fastdeploy/libtriton_fastdeploy.so
19# 0x00007F46B920283A in /opt/tritonserver/bin/../lib/libtritonserver.so
20# 0x00007F46B920304D in /opt/tritonserver/bin/../lib/libtritonserver.so
21# 0x00007F46B90B7801 in /opt/tritonserver/bin/../lib/libtritonserver.so
22# 0x00007F46B91FCDC7 in /opt/tritonserver/bin/../lib/libtritonserver.so
23# 0x00007F46B8A64DE4 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6
24# 0x00007F46B8EE2609 in /usr/lib/x86_64-linux-gnu/libpthread.so.0
25# clone in /usr/lib/x86_64-linux-gnu/libc.so.6

akansal1 — Nov 30 '22 13:11

This is a bug, and we have already fixed it. A temporary solution is to modify the config.pbtxt of the 3 runtime models: add this code to each config.pbtxt.

optimization {
  execution_accelerators {
    gpu_execution_accelerator : [
      {
        name : "paddle"
        parameters { key: "cpu_threads" value: "4" }
      }
    ]
  }
}

Refer to this: https://github.com/PaddlePaddle/FastDeploy/pull/764/files#diff-337838d37ee936ae9ff0ad9b7862475c27dc4cf8733bd9b03641bf376f77c20b
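If it helps, here is a quick, untested way to append the block to all three runtime models at once. It assumes your model repository is laid out like the PP-OCRv3 serving example with models named det_runtime, cls_runtime and rec_runtime, and that their config.pbtxt files do not already contain an optimization block; adjust the names and path if yours differ.

# append the workaround to each runtime model's config.pbtxt
# (model names below are assumed from the PP-OCRv3 serving example)
for m in det_runtime cls_runtime rec_runtime; do
  cat >> models/$m/config.pbtxt <<'EOF'
optimization {
  execution_accelerators {
    gpu_execution_accelerator : [
      {
        name : "paddle"
        parameters { key: "cpu_threads" value: "4" }
      }
    ]
  }
}
EOF
done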

HexToString — Dec 01 '22 06:12