How to send images in OpenCV format to DALI?
Here is my code:
```python
img_objs = []
for img, boxes in zip(imgbatch, boxbatch):
    for box in boxes:
        x1, y1, x2, y2 = box
        img_objs.append(img[y1:y2, x1:x2])  # crop the detected region
```
How do I send `img_objs` to the DALI model? Do I need to pad them to the same size?
Hi @grapefruitL,
Could you provide more context to your questions? Are you talking about the DALI TRITON backend or other use cases?
@JanuszL yeah, I'm using the DALI TRITON backend. What I want to do is:
- do img preprocessing using dali model, say model1
- detect person on it using yolo model, say model2
- cut out the person area then do preprocessing again with another dali model, say model3
- classify the person img with model4
When doing step 3, I ran into the problem above. Mainly 2 questions:

q1: when writing the DALI model script for step 3, which function should I use to receive this kind of data? I use the following code in step 1:
```python
import nvidia.dali as dali
import nvidia.dali.types as types

@dali.pipeline_def(batch_size=4, num_threads=1, device_id=0)
def pipe():
    images = dali.fn.external_source(device="cpu", name="DALI_INPUT_0")
    images = dali.fn.decoders.image(images, device="mixed", output_type=types.RGB)
    return images
```
q2: How do I send these person images to the DALI model? In step 1, I pad the big images like this:
```python
# pad all 1-D arrays to the length of the longest one, then stack
lengths = [arr.shape[0] for arr in arrays]
max_len = max(lengths)
arrays = [np.pad(arr, (0, max_len - arr.shape[0])) for arr in arrays]
for arr in arrays:
    assert arr.shape == arrays[0].shape, "Arrays must have the same shape"
return np.stack(arrays)
```
but in step 3 the crops are 2-D arrays, and I don't know how to do the same there.
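If zero-padding turns out to be acceptable for the downstream model, one generic way to batch variable-sized crops is to pad every axis to the per-batch maximum before stacking. This is just a sketch with NumPy; the function name `pad_to_common_shape` is hypothetical, and whether model3 tolerates zero-padded borders depends on your preprocessing:

```python
import numpy as np

def pad_to_common_shape(crops):
    """Zero-pad a list of same-ndim arrays (e.g. image crops) to a
    common shape (the per-axis maximum across the batch), then stack."""
    max_shape = np.max([c.shape for c in crops], axis=0)
    padded = [
        np.pad(c, [(0, m - s) for s, m in zip(c.shape, max_shape)])
        for c in crops
    ]
    return np.stack(padded)
```

This works for 2-D (grayscale) and 3-D (HWC) crops alike, since it pads whatever axes are present.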
Hi @grapefruitL ,
The easiest solution to your problem would be to use dynamic batching. The idea is that the client does not send whole batches to the server; instead it sends requests with batch_size=1 and lets the server perform the batching. This way you don't need to pad your samples in either model1 or model3. In this scenario I'd advise you to turn dynamic batching on in all of the models (1-4).
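For reference, turning on dynamic batching is a small addition to each model's `config.pbtxt`; the values below are illustrative, not tuned:

```
max_batch_size: 4
dynamic_batching {
  preferred_batch_size: [ 4 ]
  max_queue_delay_microseconds: 100
}
```

`max_queue_delay_microseconds` trades a little latency for larger server-side batches.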
While this is the easiest solution, the obvious downside is that we don't leverage client-side batching. Many enhancements are possible, but they depend on the specifics of your use case (e.g. using ragged batches for model1, provided that it receives encoded JPEGs as input). Should the dynamic batching idea prove insufficient in terms of performance, don't hesitate to let me know and we'll figure something out.
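For completeness, ragged batching is opted into per input in Triton's `config.pbtxt`. The fragment below is illustrative, assuming model1's encoded-JPEG input is the 1-D `DALI_INPUT_0` tensor from the pipeline above:

```
input [
  {
    name: "DALI_INPUT_0"
    data_type: TYPE_UINT8
    dims: [ -1 ]
    allow_ragged_batch: true
  }
]
```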
@szalpal Thanks for your advice, I'll try ragged batches in model1. If that works well, I'm considering encoding the person images into 1-D arrays before they are sent to model3.
@grapefruitL
Just to be clear, I believe that theoretically you could also allow ragged batches in the model3 model configuration. However, this won't work in the DALI Backend yet because some features are missing. I'll investigate using BatchInput and let you know the outcome here.
Closing as stalled.