openvino
[Good First Issue]: Include HW plugin properties in supported_properties in BATCH Plugin
Context
The BATCH plugin should include the HW plugin's compiled_model properties in supported_properties, and retrieve a property's value from the HW plugin's compiled_model when the user requests it.
`benchmark_app -d GPU -hint throughput -m <batchable model, mobilenet-v2 for example>`

```
[ INFO ] Model:
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 64
[ INFO ]   SUPPORTED_METRICS: OPTIMAL_NUMBER_OF_INFER_REQUESTS SUPPORTED_METRICS NETWORK_NAME SUPPORTED_CONFIG_KEYS EXECUTION_DEVICES
[ INFO ]   NETWORK_NAME: torch-jit-export
[ INFO ]   SUPPORTED_CONFIG_KEYS: AUTO_BATCH_TIMEOUT
[ INFO ]   EXECUTION_DEVICES: OCL_GPU.0
[ INFO ]   AUTO_BATCH_TIMEOUT: 1000
```

The expectation is that it includes GPU properties in SUPPORTED_METRICS.
What needs to be done?
- [ ] Get the properties from the HW plugin's compiled_model and add them to the BATCH plugin's supported_properties. You can obtain the HW compiled model's properties via m_compiled_model_without_batch and add them to the returned list. https://github.com/openvinotoolkit/openvino/blob/f9605cd8d4c488ad405397bc4294e35a403ae803/src/plugins/auto_batch/src/compiled_model.cpp#L205
- [ ] Add test.
Example Pull Requests
No response
Resources
- Contribution guide - start here!
- Intel DevHub Discord channel - engage in discussions, ask questions and talk to OpenVINO developers
Contact points
@zhaixuejun1993 @riverlijunjie @ilya-lavrenov @vurusovs @peterchen-intel
Ticket
CVS-133381
.take
Thank you for looking into this issue! Please let us know if you have any questions or require any help.
Hello @kumar-sanjeeev, are you still working on this? Is there anything we could help you with?
.take
Thank you for looking into this issue! Please let us know if you have any questions or require any help.
And what is meant by "BATCH plugin supported_properties"? Is the expectation that the "m_compiled_model_without_batch" properties be included in the list of supported properties?
I managed to recreate the results thus far. I ran benchmark_app with and without the GPU, and the results are as follows (only the part mentioned in this issue):
With GPU:

```
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 4
[ INFO ]   NETWORK_NAME: TensorFlow_Frontend_IR
[ INFO ]   EXECUTION_DEVICES: GPU.0
[ INFO ]   AUTO_BATCH_TIMEOUT: 1000
```
Without GPU:

```
[ INFO ] Model:
[ INFO ]   NETWORK_NAME: TensorFlow_Frontend_IR
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 4
[ INFO ]   NUM_STREAMS: 4
[ INFO ]   AFFINITY: NONE
[ INFO ]   INFERENCE_NUM_THREADS: 16
[ INFO ]   PERF_COUNT: NO
[ INFO ]   INFERENCE_PRECISION_HINT: f32
[ INFO ]   PERFORMANCE_HINT: THROUGHPUT
[ INFO ]   EXECUTION_MODE_HINT: PERFORMANCE
[ INFO ]   PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]   ENABLE_CPU_PINNING: YES
[ INFO ]   SCHEDULING_CORE_TYPE: ANY_CORE
[ INFO ]   ENABLE_HYPER_THREADING: YES
[ INFO ]   EXECUTION_DEVICES: CPU
[ INFO ]   CPU_DENORMALS_OPTIMIZATION: NO
[ INFO ]   LOG_LEVEL: LOG_NONE
[ INFO ]   CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1
[ INFO ]   DYNAMIC_QUANTIZATION_GROUP_SIZE: 0
[ INFO ]   KV_CACHE_PRECISION: f16
```
I am assuming this is the level of detail expected to be printed for the GPU as well?
The changes I am making in compiled_model.cpp of auto_batch plugin are not being reflected when running the benchmark_app command. Does build_samples_msvc not compile the code again?
The script builds the app only; it doesn't rebuild the libraries.
Alright, @peterchen-intel, I have a question regarding what exactly is required in this issue. Is it required to add the compiled model properties from m_compiled_model_without_batch to the supported properties?
@anzr299 I think you are right: just add the compiled model properties from m_compiled_model_without_batch to the supported properties.
Alright, and as for testing it, I can build the libraries and then run the benchmark app?
Hello @anzr299, are you still working on that issue? Do you need any help?
I'm unassigning myself since I am focusing on a smaller subset of problems. Sorry for the trouble.
No worries @anzr299, come back anytime, we have a lot of issues open. :)
.take
Thank you for looking into this issue! Please let us know if you have any questions or require any help.