
FRCNN-resnet101 followed by single_human_pose_estimation: Input blob size Error

Open veer5551 opened this issue 4 years ago • 2 comments

Hello, it's a wonderful toolkit; it's amazing how multiple frameworks are brought to a common platform. Thanks for developing it! I was trying some experiments and encountered some errors.

Context: trying to run single_human_pose_estimation_0001 via Python. Reference: https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Running-single-pose-estimation-demo/td-p/1146650

According to the documentation, object detection is followed by single human pose estimation. The demo script runs fine with the object detection model pedestrian-detection-adas-0002.

After changing the object detection model to faster-rcnn-resnet101-coco-sparse-60-0001, it throws an error:

```
  File "C:\Program Files (x86)\IntelSWTools\openvino_2020.4.287\inference_engine\demos\python_demos\single_human_pose_estimation_demo\detector.py", line 11, in __init__
    assert len(self.model.input_info) == 1, "Expected 1 input blob"
AssertionError: Expected 1 input blob
```

Looking into the code of detector.py in single_human_pose_estimation, it expects a model with exactly 1 input blob, while faster-rcnn-resnet101-coco-sparse-60-0001 has 2 inputs.
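The mismatch can be sketched without OpenVINO installed: two-input Faster R-CNN IRs typically expose a 4-D image tensor plus a small auxiliary input (often called `image_info`), so instead of asserting a single input, the demo could select the 4-D image blob among however many inputs exist. Below is a minimal illustration of that selection logic, using plain dicts to stand in for `input_info`; the input names and shapes are illustrative assumptions, not taken from the actual IR files.

```python
def find_image_input(input_shapes):
    """Pick the 4-D (N, C, H, W) image blob among a model's inputs.

    Single-input detectors pass trivially; two-input Faster R-CNN IRs
    (image tensor plus a low-rank image_info blob) need this selection
    step instead of `assert len(inputs) == 1`.
    """
    image_inputs = [name for name, shape in input_shapes.items()
                    if len(shape) == 4]
    assert len(image_inputs) == 1, "Expected exactly one 4-D image input"
    return image_inputs[0]

# SSD-style model: a single image input (shapes are hypothetical)
ssd_like = {"data": (1, 3, 384, 672)}
# Faster R-CNN-style model: image tensor + auxiliary image_info input
frcnn_like = {"image": (1, 3, 800, 1280), "image_info": (1, 3)}

print(find_image_input(ssd_like))    # -> data
print(find_image_input(frcnn_like))  # -> image
```

With this kind of lookup, the same detector wrapper could accept both one- and two-input topologies instead of failing the input-count assertion.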

Is there any proposed approach/solution to this problem? Should faster-rcnn-resnet101-coco-sparse-60-0001 run with the object_detection_demo_ssd_async demo? I am wondering why we have different input/output sizes for different demos!

Also, if we are bringing the models to a common platform (IR), like, say, the object detection models, shouldn't they have **one** input/output format and size (length)?

Thanks a lot!

veer5551 avatar Oct 01 '20 06:10 veer5551

@veer5551 thanks for your feedback. Note that single_human_pose_estimation_demo is developed to be used (and is validated accordingly) with certain OMZ models. The list of models supported by each OMZ demo can be found in the models.lst file located in each demo folder. The fact is that there are a lot of models solving a lot of different tasks, implemented in different frameworks and by different people, so it is hardly possible to have one input/output format and size for all of them. Representing a model in OpenVINO IR by itself does not change the number of inputs/outputs or their sizes.

vladimir-dudnik avatar Oct 01 '20 20:10 vladimir-dudnik

Thanks a lot for replying! @vladimir-dudnik

> The fact is that there are a lot of models solving a lot of different tasks, implemented in different frameworks and by different people, so it is hardly possible to have one input/output format and size for all of them.

Agree!

> The list of models supported by each OMZ demo can be found in the models.lst file located in each demo folder.

I went through the lists for single_human_pose_estimation_demo and object_detection_demo_ssd_async and found some common models as well. Also, the FRCNN-resnet model works well with the object_detection_demo_ssd_async demo, even though it is not mentioned in the list. I need to dive into the code for both detectors.

With some modifications in detector.py, I think single_human_pose_estimation_demo should work with FRCNN as well. I need to give it a try.
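One possible shape for that modification, sketched here with plain Python stand-ins rather than the real Inference Engine objects: after choosing the image input, fill every remaining input with an image-info row. This assumes the second input is the usual `[height, width, scale]` blob that two-input Faster R-CNN IRs expect; the input names and the `[H, W, 1]` layout are assumptions to verify against the actual model description.

```python
def build_feed_dict(input_shapes, image_name, preprocessed_image):
    """Assemble inference inputs for either 1- or 2-input detectors.

    For a two-input Faster R-CNN IR, the extra input is assumed to be
    an image_info blob filled with [input_height, input_width, scale=1].
    """
    _, _, h, w = input_shapes[image_name]
    feed = {image_name: preprocessed_image}
    for name, shape in input_shapes.items():
        if name == image_name:
            continue
        # Hypothetical layout: one row of [H, W, scale]
        feed[name] = [[h, w, 1]]
    return feed

# Hypothetical two-input model description
shapes = {"image": (1, 3, 800, 1280), "image_info": (1, 3)}
feed = build_feed_dict(shapes, "image", "fake_image_blob")
print(feed["image_info"])  # -> [[800, 1280, 1]]
```

A single-input model simply produces a one-entry feed dict, so the same code path serves both the pedestrian-detection-style and FRCNN-style detectors.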

Thanks a lot!

veer5551 avatar Oct 02 '20 04:10 veer5551