
[Performance]: VariadicSplit Op's CPU time is different between 2024.0.0 and 2023.0.0

Open sitabulaixizawaluduo opened this issue 9 months ago • 5 comments

OpenVINO Version

2024.0.0

Operating System

Ubuntu 22.04 (LTS)

Device used for inference

CPU

OpenVINO installation

Build from source

Programming Language

Python

Hardware Architecture

x86 (64 bits)

Model used

recommend

Model quantization

No

Target Platform

No response

Performance issue description

When I moved from OpenVINO 2023.0.0 to 2024.0.0 and used benchmark_app to test my model's performance, I found that with the hint set to "throughput" the FPS decreased from 952 to 878. Reviewing the performance counters, I noticed that the "VariadicSplit" operation had a CPU time of 0 in version 2023.0.0, but a non-zero CPU time in 2024.0.0. What could be the reason for this?
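A comparison of this kind can be reproduced with matched benchmark_app runs on both versions (a sketch; the model path is a placeholder, and -pc prints the per-operation performance counters the CPU times above come from):

```sh
# Run with identical flags under OpenVINO 2023.0.0 and 2024.0.0,
# then compare the per-operation counters in the two reports.
benchmark_app -m model.xml -d CPU -hint throughput -pc
```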

Step-by-step reproduction

No response

Issue submission checklist

  • [X] I'm reporting a performance issue. It's not a question.
  • [X] I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
  • [ ] There is reproducer code and related data files such as images, videos, models, etc.

sitabulaixizawaluduo avatar Apr 29 '24 08:04 sitabulaixizawaluduo

Please ensure that you're using the same benchmark_app parameters when comparing performance between the two OpenVINO versions, for example -nireq, -nstreams, and -nthreads.

If the scenario is still the same, please share your relevant model files.
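For example, a matched pair of runs with these parameters pinned explicitly might look like this (the model path and values are placeholders):

```sh
# Identical request/stream/thread configuration on both versions
# rules out scheduler-configuration differences as the cause.
benchmark_app -m model.xml -d CPU -nireq 24 -nstreams 24 -nthreads 24
```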

YuChern-Intel avatar May 04 '24 03:05 YuChern-Intel

> Please ensure that you're using the same benchmark_app parameters when comparing performance between the two OpenVINO versions, for example -nireq, -nstreams, and -nthreads.
>
> If the scenario is still the same, please share your relevant model files.

I have set -nireq 24, -nstreams 24, and -nthreads 24, but the result is the same as before.

sitabulaixizawaluduo avatar May 07 '24 08:05 sitabulaixizawaluduo

Could you share your relevant model files?

YuChern-Intel avatar May 08 '24 05:05 YuChern-Intel

> Could you share your relevant model files?

```python
import numpy as np
import onnx
from onnx import helper, TensorProto

# Split sizes along axis 1; the 96 entries sum to 279, matching the input shape.
index = [1, 1, 1, 1, 1, 1, 1, 1, 10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         30, 30, 30, 1, 1, 1, 1, 1, 1, 1, 1, 30, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 30, 1, 30, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
split = np.array(index).astype(np.int32)

# Input tensor and the split-sizes initializer.
input_1 = helper.make_tensor_value_info('input_1', TensorProto.FLOAT, [256, 279, 81])
initializers = [helper.make_tensor(
    name='split',
    data_type=TensorProto.INT32,
    dims=[96],
    vals=split.flatten().tolist())]

# One output per split chunk.
outputs_list = []
for i in range(96):
    outputs_list.append(helper.make_tensor_value_info(
        'output_' + str(i + 1), TensorProto.FLOAT, [256, index[i], 81]))

# ONNX Split with explicit split sizes; OpenVINO imports this as VariadicSplit.
node_def = helper.make_node(
    "Split",
    inputs=["input_1", "split"],
    outputs=["output_" + str(i + 1) for i in range(96)],
    axis=1,
)
graph_def = helper.make_graph(
    [node_def],
    'test-model',
    [input_1],
    outputs_list,
    initializer=initializers,
)
model_def = helper.make_model(
    graph_def,
    producer_name='onnx-example',
    opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model_def)
onnx.save(model_def, "signal_split_13_new.onnx")
```

Thanks for the reply! You can create the ONNX file with this code, then use mo to convert it to an OpenVINO IR file.
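For reference, the conversion step would look like this (a sketch; mo is the legacy Model Optimizer shipped with these releases, and the IR file names are its defaults):

```sh
# Convert the generated ONNX model to OpenVINO IR
# (produces signal_split_13_new.xml/.bin in the current directory).
mo --input_model signal_split_13_new.onnx
```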

sitabulaixizawaluduo avatar May 08 '24 05:05 sitabulaixizawaluduo

This issue is related to #24412

sitabulaixizawaluduo avatar May 08 '24 05:05 sitabulaixizawaluduo

Can you check with the latest 2024.1 release to see whether it has the same issue?
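As a quick check of which build is actually in use, something like this one-liner can help (a sketch, assuming the openvino Python package is on the path):

```sh
python3 -c "from openvino.runtime import get_version; print(get_version())"
```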

YuChern-Intel avatar May 15 '24 23:05 YuChern-Intel

Closing issue, feel free to re-open or start a new issue if additional assistance is needed.

YuChern-Intel avatar Jun 21 '24 06:06 YuChern-Intel