paddlepaddle_backend
Dear project maintainers, I hope this message finds you well. **I wanted to inquire about the possibility of adding support for the ARM architecture in the paddlepaddle_triton project.** Currently, it seems...
So far the latest publicly available Triton Inference Server image with the Paddle backend is `paddlepaddle/triton_paddle:21.10`, and there have been many bug fixes since then. I'm running into an increasing number of bugs...
Done:
1. Use the PaddlePaddle C API
2. Use the same namespace `triton::backend::paddle`
3. Support the config auto-complete feature and ValidateModelConfig (ValidateOutputs, ValidateInputs)
4. Triton versions from 21.10 to 22.x are supported

Undone...
When running inference with the RE model from PaddleOCR, I get the error below. What could be the cause?

Model path: https://paddleocr.bj.bcebos.com/ppstructure/models/vi_layoutxlm/re_vi_layoutxlm_xfund_infer.tar

Error log:
```
InvalidArgumentError: The tensor Input (Input) of Slice op is not initialized. [Hint: Expected in_tensor.IsInitialized() == true, but received in_tensor.IsInitialized():0 != true:1.] (at /opt/tritonserver/Paddle/paddle/fluid/operators/slice_op.cc:147)
```
I want to use TensorRT and set the `disenable_trt_tune` option to `True`, but I get the exception below: ```bash unknown parameter 'disenable_trt_tune` is provided for GPU execution accelerator config. Available choices are...
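The error message indicates the backend rejected an unrecognized parameter name in the GPU execution accelerator config. For context, Triton attaches accelerator parameters in `config.pbtxt` through the `optimization` block; a minimal sketch of that structure is below. The parameter key `precision` is purely illustrative; the exact keys the Paddle backend accepts (including the correct spelling of the tuning option) should be checked against the backend's README or the full list printed in the error message.

```
optimization {
  execution_accelerators {
    gpu_execution_accelerator : [
      {
        name : "tensorrt"
        # Illustrative key only; use a parameter name the Paddle backend
        # actually lists among its available choices.
        parameters { key: "precision" value: "fp16" }
      }
    ]
  }
}
```

If the backend reports "available choices" in the exception, the accepted spelling of the option is usually in that list.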
```bash
root@nvidia-B360M-D2V:/opt/tritonserver/backends/paddlepaddle_backend-main/paddle-lib# bash build_paddle.sh
+ docker build -t paddle-build .
[+] Building 0.7s (3/3) FINISHED
 => [internal] load .dockerignore          0.2s
 => => transferring context: 2B            0.0s
 => [internal] load build definition...
```
When not using TensorRT for inference, with the config file below, inference works normally.

```
name: "test"
backend: "paddle"
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 896, 896 ]
  }
]
output [
  {
    name: "conv2d_59.tmp_1"
    data_type: TYPE_FP32
    dims: [...
```