dali_backend
Error when executing Mixed operator decoders__Image when sending image binary to dali in triton
Hello, I'm trying to send an image in binary format to a Triton server with DALI preprocessing.
I'm trying to send JPEG or PNG images as bytes, but I get an error: `Unrecognized image format. Supported formats are: JPEG, PNG, BMP, TIFF, PNM, JPEG2000 and WebP.`
This is how the DALI pipeline looks:

```python
@dali.pipeline_def(batch_size=64, num_threads=4, device_id=0)
def pipe():
    images = dali.fn.external_source(device="cpu", name="DALI_INPUT_0")
    images = dali.fn.decoders.image(images, device="mixed", output_type=types.RGB)
    images = dali.fn.resize(images, resize_x=224, resize_y=224, device='gpu')
    return dali.fn.crop_mirror_normalize(images,
                                         dtype=types.FLOAT16,
                                         output_layout="CHW",
                                         device='gpu',
                                         mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
                                         std=[0.229 * 255, 0.224 * 255, 0.225 * 255])
```
Model repository for the Triton server:

```
model_repository_dali_back/
├── dali
│   ├── 1
│   │   └── model.dali
│   └── config.pbtxt
├── ensemble_dali_vit
│   ├── 1
│   └── config.pbtxt
└── vit_base_trt
    ├── 1
    │   └── model.plan
    └── config.pbtxt
```
DALI `config.pbtxt`:

```
name: "dali"
backend: "dali"
max_batch_size: 64
input [
  {
    name: "DALI_INPUT_0"
    data_type: TYPE_STRING
    dims: [ -1 ]
  }
]
output [
  {
    name: "DALI_OUTPUT_0"
    data_type: TYPE_FP16
    dims: [ 3, 224, 224 ]
  }
]
parameters: [
  {
    key: "num_threads"
    value: { string_value: "4" }
  }
]
dynamic_batching {}
```
Script to send a request to Triton:

```python
import numpy as np
import tritonclient.http as httpclient
from tritonclient.utils import triton_to_np_dtype

# Set up a connection with the Triton Inference Server.
triton_client = httpclient.InferenceServerClient(url="localhost:8000")

input_name = "INPUT"
output_name = "OUTPUT"
model_name = "ensemble_dali_vit"

## READ IMAGE
img_bytes = open('test_img.png', "rb").read()  ## Also tried with a .jpeg image
img_data = np.array([img_bytes], dtype=bytes)
transformed_img = np.stack([img_data], axis=0)

# Specify the names of the input and output layer(s) of our model.
test_input = httpclient.InferInput(input_name, transformed_img.shape, datatype="BYTES")
test_input.set_data_from_numpy(transformed_img, binary_data=True)
test_output = httpclient.InferRequestedOutput(output_name, binary_data=True)

# Query the server.
results = triton_client.infer(model_name=model_name, inputs=[test_input], outputs=[test_output])
test_output = results.as_numpy(output_name)
print(test_output)
```
Error log:

```
Traceback (most recent call last):
  File "/home/proevgenii1/tensorrt/mock_test_triron.py", line 22, in <module>
    results = triton_client.infer(model_name=model_name, inputs=[test_input], outputs=[test_output])
  File "/root/anaconda3/envs/tensorrt/lib/python3.9/site-packages/tritonclient/http/__init__.py", line 1490, in infer
    _raise_if_error(response)
  File "/root/anaconda3/envs/tensorrt/lib/python3.9/site-packages/tritonclient/http/__init__.py", line 65, in _raise_if_error
    raise error
tritonclient.utils.InferenceServerException: in ensemble 'ensemble_dali_vit', Runtime error: Critical error in pipeline:
Error when executing Mixed operator decoders__Image encountered:
Error in thread 1: [/opt/dali/dali/operators/decoder/nvjpeg/nvjpeg_decoder_decoupled_api.h:615] [/opt/dali/dali/image/image_factory.cc:102] Unrecognized image format. Supported formats are: JPEG, PNG, BMP, TIFF, PNM, JPEG2000 and WebP.
Stacktrace (7 entries):
[frame 0]: /opt/tritonserver/backends/dali/dali/libdali.so(+0x85422) [0x7fdb5ee1e422]
[frame 1]: /opt/tritonserver/backends/dali/dali/libdali.so(dali::ImageFactory::CreateImage(unsigned char const*, unsigned long, dali::DALIImageType)+0x204) [0x7fdb5ef2adf4]
[frame 2]: /opt/tritonserver/backends/dali/dali/libdali_operators.so(+0x96c9d4) [0x7fdb519bc9d4]
[frame 3]: /opt/tritonserver/backends/dali/dali/libdali.so(dali::ThreadPool::ThreadMain(int, int, bool, std::string const&)+0x1d0) [0x7fdb5ef01430]
[frame 4]: /opt/tritonserver/backends/dali/dali/libdali.so(+0x7470bf) [0x7fdb5f4e00bf]
[frame 5]: /usr/lib/x86_64-linux-gnu/libpthread.so.0(+0x8609) [0x7fdc20e87609]
[frame 6]: /usr/lib/x86_64-linux-gnu/libc.so.6(clone+0x43) [0x7fdc1f7fa133]
. File:
Stacktrace (6 entries):
[frame 0]: /opt/tritonserver/backends/dali/dali/libdali_operators.so(+0x595282) [0x7fdb515e5282]
[frame 1]: /opt/tritonserver/backends/dali/dali/libdali_operators.so(+0x96d53d) [0x7fdb519bd53d]
[frame 2]: /opt/tritonserver/backends/dali/dali/libdali.so(dali::ThreadPool::ThreadMain(int, int, bool, std::string const&)+0x1d0) [0x7fdb5ef01430]
[frame 3]: /opt/tritonserver/backends/dali/dali/libdali.so(+0x7470bf) [0x7fdb5f4e00bf]
[frame 4]: /usr/lib/x86_64-linux-gnu/libpthread.so.0(+0x8609) [0x7fdc20e87609]
[frame 5]: /usr/lib/x86_64-linux-gnu/libc.so.6(clone+0x43) [0x7fdc1f7fa133]
Current pipeline object is no longer valid.
```
It looks like the image format was not recognized correctly. Or am I doing something wrong?
Hi @proevgenii !
Your code looks more or less OK. I believe there might be a few reasons why the image format is not recognized properly:
- We advise loading the data using the `np.fromfile` function instead of `open(..., 'rb')` (I know, many of our examples show the `open(...)` approach). The former is expected to be 5 times faster than the latter:

  ```python
  img_data = np.fromfile('test_img.png', dtype=np.uint8)
  ```
- In your configuration file, please use `TYPE_UINT8` as the input type. I believe `TYPE_STRING` in Triton has a special meaning and won't work correctly as an input type for DALI:

  ```
  input [
    {
      name: "DALI_INPUT_0"
      data_type: TYPE_UINT8
      dims: [ -1 ]
    }
  ]
  ```
- Please verify the `transformed_img` shape. It should be something like `(1, 28172930)` (i.e. `(batch_size, number_of_bytes_in_encoded_img)`).
- I'm not sure you need the `binary_data=True` argument in the `set_data_from_numpy` function.
If none of these points helps, please let us know, and we'll try to figure something out.
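Put together, the suggested changes to the client side might look like the following sketch. The file is a stand-in (a few placeholder bytes written to a temp file, not a real image), and the shapes are illustrative; with a real encoded image the batch would be `(1, number_of_bytes_in_encoded_img)`:

```python
import os
import tempfile
import numpy as np

# Stand-in for an encoded JPEG/PNG file on disk (placeholder bytes,
# not a decodable image -- only the loading/batching step is shown).
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as f:
    f.write(b"\x89PNG\r\n\x1a\n" + b"\x00" * 100)
    path = f.name

# Suggested loading approach: the raw bytes as a flat uint8 array...
img_data = np.fromfile(path, dtype=np.uint8)

# ...then batched to match the (batch_size, num_encoded_bytes) shape
# that the TYPE_UINT8 / dims: [ -1 ] input expects.
transformed_img = np.stack([img_data])
print(transformed_img.shape)  # (1, 108)

os.remove(path)
```

The batched array would then go to `InferInput(..., "UINT8")` via `set_data_from_numpy`.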
Hello, @szalpal! Thanks for such a quick reply!
Regarding 1 and 2: I have already tried the approach with `np.fromfile` and using `TYPE_UINT8` in the configuration, and this works.
But I'm using this Triton server in a production system where the images are already in byte format, and they look like this string:

```
b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x01\x00H\...
```

When I use `open(..., 'rb')` I get a similar string, which is why I use this method in the example above.
I could save my byte images to a .png file and then use `np.fromfile`, but this would dramatically degrade system performance.
So is there any way to send an image as a byte string to dali_backend?
3.

```python
img_bytes = open('test_img.png', "rb").read()   ### len(img_bytes) = 915829
img_data = np.array([img_bytes], dtype=bytes)   ### img_data.shape = (1,)
transformed_img = np.stack([img_data], axis=0)  ### transformed_img.shape = (1, 1)
```
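The shape mismatch above comes from how numpy treats the buffer: `dtype=bytes` keeps the whole encoded image as a single element, while a uint8 view exposes every byte individually. A small sketch with placeholder bytes:

```python
import numpy as np

img_bytes = b"\xff\xd8\xff\xe0" + b"\x00" * 96  # placeholder for encoded JPEG data

# dtype=bytes keeps the whole buffer as one element, hence shape (1,):
as_bytes = np.array([img_bytes], dtype=bytes)
print(as_bytes.shape)  # (1,)

# A uint8 view exposes each of the 100 bytes individually, which is
# what a TYPE_UINT8 input with dims: [ -1 ] expects after batching:
as_uint8 = np.frombuffer(img_bytes, dtype=np.uint8)
print(as_uint8.shape)  # (100,)
```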
4. Removing `binary_data=True` doesn't change anything; I'm still getting the same error:

```
Error when executing Mixed operator decoders__Image encountered:
Error in thread 1: [/opt/dali/dali/operators/decoder/nvjpeg/nvjpeg_decoder_decoupled_api.h:615] [/opt/dali/dali/image/image_factory.cc:102] Unrecognized image format. Supported formats are: JPEG, PNG, BMP, TIFF, PNM, JPEG2000 and WebP.
```
@proevgenii
Got it. In that case, you're fine with `open`. I believe that `TYPE_STRING` is the real problem. Please use `TYPE_UINT8` combined with `.astype(np.uint8)` and everything should work well.
You can use a snippet from one of our examples:

```python
def load_image(img_path: str):
    """
    Loads image as an encoded array of bytes.
    This is a typical approach you want to use in DALI backend.
    """
    with open(img_path, "rb") as f:
        img = f.read()
    return np.array(list(img)).astype(np.uint8)
```
This function creates a byte array that should be passed to `set_data_from_numpy` (note that `set_data_from_numpy` expects a numpy array, so the single image is stacked into a batch of one):

```python
input = grpc.InferInput(input_name, input_shape, "UINT8")
input.set_data_from_numpy(np.stack([load_image("path_to_my_image")]))
```

You can refer to the ensemble_client for an example, which reflects quite well what you want to do, especially the functions `load_image`, `load_images`, and `array_from_list`.
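For reference, the idea behind an `array_from_list`-style helper is to pad variable-length encoded buffers to a common length so they stack into one contiguous batch. A rough sketch of that idea (the function body here is an illustration, not the exact code from the examples; zero-padding at the end of each buffer is the assumption):

```python
import numpy as np

def array_from_list(arrays):
    """Stack variable-length uint8 buffers into one batch by zero-padding
    each buffer to the length of the longest one."""
    max_len = max(a.shape[0] for a in arrays)
    padded = [np.pad(a, (0, max_len - a.shape[0])) for a in arrays]
    return np.stack(padded)

# Two "encoded images" of different byte lengths become one (2, 10) batch:
batch = array_from_list([np.zeros(10, dtype=np.uint8),
                         np.zeros(7, dtype=np.uint8)])
print(batch.shape)  # (2, 10)
```

Trailing zero bytes are harmless here because the image decoder stops at the end of the encoded stream.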
Thanks again, @szalpal! It works perfectly! And, if possible, I have one more question about the image preprocessing pipeline: what is the most time-efficient pipeline? I only need two operations, resize and normalize.
Then the pipeline you've pasted at the top is a good starting point:

```python
@dali.pipeline_def(batch_size=64, num_threads=4, device_id=0)
def pipe():
    images = dali.fn.external_source(device="cpu", name="DALI_INPUT_0")
    images = dali.fn.decoders.image(images, device="mixed", output_type=types.RGB)
    images = dali.fn.resize(images, resize_x=224, resize_y=224, device='gpu')
    return dali.fn.crop_mirror_normalize(images,
                                         dtype=types.FLOAT16,
                                         output_layout="CHW",
                                         device='gpu',
                                         mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
                                         std=[0.229 * 255, 0.224 * 255, 0.225 * 255])
```
When working with images and requiring only resize and normalize, the best approach is to use `fn.resize` and `fn.crop_mirror_normalize`.
Hello here again! 🖖🖖
I still need to send data in the form of byte strings, because the `np.array(list(img)).astype(np.uint8)` operation is too time-consuming.
Have there been any updates? I do everything as written in this issue, but I get the same error:

```
Error when executing Mixed operator decoders__Image encountered:
Error in thread 0: [/opt/dali/dali/operators/decoder/nvjpeg/nvjpeg_decoder_decoupled_api.h:616] [/opt/dali/dali/image/image_factory.cc:100] Unrecognized image format. Supported formats are: JPEG, PNG, BMP, TIFF, PNM, JPEG2000 and WebP.
```
Hello @proevgenii This approach is orders of magnitude faster (I think it's actually zero-copy):

```python
np.frombuffer(img, dtype=np.uint8)
```

where `img` is your `bytes` object.
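A quick sketch showing that `np.frombuffer` reinterprets the same memory rather than copying it (the bytes are placeholders for a real encoded image):

```python
import numpy as np

img = b"\xff\xd8\xff\xe0" + b"\x00" * 12  # placeholder for an encoded image

arr = np.frombuffer(img, dtype=np.uint8)

# Same bytes, no copy: the array is a read-only view over the bytes object.
print(arr.shape)             # (16,)
print(arr.tobytes() == img)  # True
print(arr.flags.writeable)   # False
```

Because the underlying `bytes` object is immutable, the resulting array is read-only; that is fine for sending data to the server, since the client only reads the buffer.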
Hi @mzient Yes, it works, and it's much faster than my previous method, thank you 😊
But is there any way to send a binary string to DALI? Or can DALI not decode byte strings?
@mzient @szalpal Any updates?)