
Model fails to load with the OpenVINO backend

Open chiehpower opened this issue 6 months ago • 2 comments

Description

Hi all,

I have an IR model that I was trying to deploy on Triton server v23.10, but it failed with this error:

Warning: '--strict-model-config' has been deprecated! Please use '--disable-auto-complete-config' instead.
W1228 08:29:31.708605 1 pinned_memory_manager.cc:237] Unable to allocate pinned system memory, pinned memory pool will not be available: CUDA driver version is insufficient for CUDA runtime version
I1228 08:29:31.708671 1 cuda_memory_manager.cc:117] CUDA memory pool disabled
I1228 08:29:31.709538 1 model_lifecycle.cc:461] loading: openvino_model:1
I1228 08:29:31.714584 1 openvino.cc:1345] TRITONBACKEND_Initialize: openvino
I1228 08:29:31.714617 1 openvino.cc:1355] Triton TRITONBACKEND API version: 1.16
I1228 08:29:31.714631 1 openvino.cc:1361] 'openvino' TRITONBACKEND API version: 1.16
I1228 08:29:31.714664 1 openvino.cc:1445] TRITONBACKEND_ModelInitialize: openvino_model (version 1)
W1228 08:29:31.729751 1 openvino.cc:752] model layout for model openvino_model does not support batching while non-zero max_batch_size is specified
I1228 08:29:31.729823 1 openvino.cc:1470] TRITONBACKEND_ModelFinalize: delete model state
E1228 08:29:31.729845 1 model_lifecycle.cc:621] failed to load 'openvino_model' version 1: Internal: openvino error in retrieving original shapes fromoutput valid : get_shape was called on a descriptor::Tensor with dynamic shape
I1228 08:29:31.729863 1 model_lifecycle.cc:756] failed to load 'openvino_model'
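(The error above means at least one tensor in the IR has a dynamic shape, which `get_shape` cannot resolve. A quick way to spot dynamic dimensions without loading the model is to scan the IR's `model.xml` directly; this is a minimal sketch using only the Python standard library, and the function name `dynamic_ports` is mine, not part of any Triton or OpenVINO API.)

```python
import xml.etree.ElementTree as ET

def dynamic_ports(xml_path):
    """Return (layer_name, dims) pairs for every port whose shape
    contains a dynamic dimension (-1) in an OpenVINO IR file."""
    tree = ET.parse(xml_path)
    hits = []
    for layer in tree.getroot().iter("layer"):
        for port in layer.iter("port"):
            dims = [d.text for d in port.iter("dim")]
            if "-1" in dims:
                hits.append((layer.get("name"), dims))
    return hits

if __name__ == "__main__":
    # Print every layer port that still has a dynamic dimension.
    for name, dims in dynamic_ports("model.xml"):
        print(f"{name}: {dims}")
```

If this prints anything, the model has dynamic shapes that the backend will reject.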

I also tried writing a config.pbtxt file (contents below), but I still got the same error.

name: "openvino_model"
backend: "openvino"
max_batch_size: 1

instance_group {
  kind: KIND_CPU
}

parameters: [
  {
    key: "ENABLE_BATCH_PADDING"
    value: {
      string_value: "YES"
    }
  }
]

I'm not sure whether this is because the config file is incorrect.

Are there any suggestions?

Triton Information

Container image: nvcr.io/nvidia/tritonserver:23.10-py3

chiehpower avatar Dec 28 '23 09:12 chiehpower

@tanmayv25 any ideas?

Tabrizian avatar Jan 12 '24 23:01 Tabrizian

CC: @tanmayv25

dyastremsky avatar Feb 20 '24 18:02 dyastremsky

I got the same issue. The key point is that the OpenVINO backend does not support dynamic batch sizes yet, so keep that in mind. I see two solutions. Solution 1: write a custom backend. Solution 2 (short-term workaround), since the backend does not support dynamic shapes:

  • In model.xml, find the `shape` attribute in the first `data` tag and change it, e.g. shape="32,3,112,112" -> shape="32, 3, 112, 112"; then replace every -1 value in the `dim` tags with 32. In my case the batch size is 32; adjust it to whatever you want.
  • In the config.pbtxt file, set max_batch_size: 0 and give the input and output explicit dims: [32, 3, 112, 112].
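(The manual model.xml edits in the workaround above can be automated. This is a minimal sketch with the Python standard library, assuming the usual OpenVINO IR layout; the function name `fix_batch` and the batch size 32 are illustrative, not part of any official tool.)

```python
import xml.etree.ElementTree as ET

def fix_batch(xml_in, xml_out, batch=32):
    """Replace every dynamic dimension (-1) in an OpenVINO IR file
    with a fixed batch size, in both `shape` attributes and `dim` tags."""
    tree = ET.parse(xml_in)
    root = tree.getroot()
    # Rewrite shape attributes, e.g. shape="-1,3,112,112" -> "32,3,112,112".
    for data in root.iter("data"):
        shape = data.get("shape")
        if shape:
            parts = [p.strip() for p in shape.split(",")]
            if "-1" in parts:
                data.set("shape", ",".join(
                    str(batch) if p == "-1" else p for p in parts))
    # Rewrite dim elements, e.g. <dim>-1</dim> -> <dim>32</dim>.
    for dim in root.iter("dim"):
        if dim.text and dim.text.strip() == "-1":
            dim.text = str(batch)
    tree.write(xml_out)
```

With the dimensions fixed this way, the matching config.pbtxt would use max_batch_size: 0 and explicit dims: [32, 3, 112, 112] for the input and output, as described above.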

BigVikker avatar Feb 26 '24 14:02 BigVikker