Luis E Avitia

Results: 42 comments by Luis E Avitia

@cheny985 a similar issue was reported and is currently under investigation; it seems some information about dynamic shapes is lost during conversion from ONNX to IR. I will share...
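As a workaround sketch (not the investigated fix), dynamic dimensions that were lost during conversion can usually be reintroduced on the loaded model before compilation; the file name `model.xml` and the input name `src` below are placeholders:

```python
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder IR path

# If the dynamic dimensions were dropped during ONNX -> IR conversion,
# they can be restored on the loaded model before compiling it.
model.reshape({"src": ov.PartialShape([-1, 3, -1, -1])})  # "src" is a placeholder input name
compiled_model = core.compile_model(model, "CPU")
```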

@cheny985 this PR [https://github.com/openvinotoolkit/openvino/pull/10684](https://github.com/openvinotoolkit/openvino/pull/10684) might fix the issue you are observing; please check it out and give it a try on your side. FYI I've performed a quick run with...

Closing this; feel free to reopen to provide the requested information or ask any questions related to this topic.

@KunalArora, by algorithm do you mean the models used by the [Multi Camera Multi Person demo](https://docs.openvinotoolkit.org/latest/_demos_python_demos_multi_camera_multi_person_tracking_README.html)? If so, I assume it really depends on which models for _Person detection (or person instance...

@shliar apologies for the delay in our response. I've spent some time comparing the results from the PyTorch model, the ONNX model (with ONNX Runtime), and OpenVINO (generated models based on shared...
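For reference, a minimal sketch of the kind of cross-runtime comparison described above, assuming a single-input model; the path `model.onnx` and the input shape are placeholders, not the actual shared model:

```python
import numpy as np
import onnxruntime as ort
import openvino.runtime as ov

# Placeholder input shape; adjust to the model's real input layout.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Reference result from ONNX Runtime.
sess = ort.InferenceSession("model.onnx")  # placeholder path
ort_out = sess.run(None, {sess.get_inputs()[0].name: data})[0]

# Result from OpenVINO reading the same ONNX file (single-input model assumed).
core = ov.Core()
compiled = core.compile_model(core.read_model("model.onnx"), "CPU")
ov_out = compiled([data])[compiled.output(0)]

print("max abs diff:", np.abs(ort_out - ov_out).max())
```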

Closing this; I hope the previous responses were sufficient to help you proceed. Feel free to reopen and ask additional questions related to this topic.

@wang7393 with a model visualizer I can see the OpenVINO IR model has dynamic shapes for input and output, with values as follows:

inputs:
  src: [-1,3,-1,-1]
  r1i-r4i: [-1,-1,-1,-1]
  downsample_ratio: [1]
...
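The same information can be dumped without a visualizer by iterating over the model's inputs and outputs; a minimal sketch, assuming the IR file is named model.xml (dynamic dimensions print as "?"):

```python
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder IR path

# Print input/output names with their (possibly dynamic) partial shapes.
for inp in model.inputs:
    print("input ", inp.get_any_name(), inp.get_partial_shape())
for out in model.outputs:
    print("output", out.get_any_name(), out.get_partial_shape())
```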

@wang7393 on my previous test I had inference working on CPU, but the same code for GPU throws a _RuntimeError_ as you described.

```
compiled_model = core.compile_model(model, device)
...
RuntimeError:...
```
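One workaround that is often suggested when the GPU plugin rejects a dynamically-shaped model is to reshape it to fully static dimensions before compiling; a sketch under that assumption, where all concrete shape values below are placeholders (the input names follow the listing above):

```python
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder IR path

# Fix every dynamic dimension to a concrete value before compiling for GPU.
# The shape values are placeholders for illustration only.
model.reshape({
    "src": [1, 3, 1080, 1920],
    "r1i": [1, 1, 1, 1],
    "r2i": [1, 1, 1, 1],
    "r3i": [1, 1, 1, 1],
    "r4i": [1, 1, 1, 1],
    "downsample_ratio": [1],
})
compiled_model = core.compile_model(model, "GPU")
```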

@wang7393 inference time is essentially the same in my test; for IR models I see around ~19.60 ms for the static model, ~19.48 ms with the dynamic model, and ~20.16 ms with dynamic...
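For completeness, a minimal sketch of the kind of timing loop behind numbers like these (model path, input shape, and run count are placeholders); `benchmark_app` is the more rigorous option:

```python
import time
import numpy as np
import openvino.runtime as ov

core = ov.Core()
compiled = core.compile_model(core.read_model("model.xml"), "CPU")  # placeholder path
request = compiled.create_infer_request()

data = np.random.rand(1, 3, 1080, 1920).astype(np.float32)  # placeholder input

# Warm up once, then average latency over repeated runs.
# Models with several inputs need an entry in the dict for each of them.
request.infer({0: data})
times = []
for _ in range(100):
    start = time.perf_counter()
    request.infer({0: data})
    times.append(time.perf_counter() - start)
print(f"mean latency: {1000 * sum(times) / len(times):.2f} ms")
```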

@wang7393 I am seeing results similar to yours based on the provided models. The epoch-0-dynamic model with dynamic vs static shapes (Model Optimizer conversion) has very similar inference times (see...