openvino_contrib
[ARM plugin] Unsupported Slice configuration
Wav2Vec is an audio recognition model. The IR was generated from the PyTorch model at https://huggingface.co/anton-l/wav2vec2-base-ft-keyword-spotting
xml: https://drive.google.com/file/d/1398EKT21lldoYABaPzo9P6Okj7NaY8Ov/view?usp=sharing
bin: https://drive.google.com/file/d/1hshvJGYmn1q714T7JSpVUHwdaXPqDPiX/view?usp=sharing
CreateInferRequest: Arm Plugin: Nodes from torch-jit-export are not supported by plugin:
315 (Slice.0)
The problem seems to be with a Slice layer:
<layer id="152" name="315" type="Slice" version="opset8">
<input>
<port id="0" precision="FP32">
<dim>1</dim>
<dim>768</dim>
<dim>50</dim>
</port>
<port id="1" precision="I64">
<dim>1</dim>
</port>
<port id="2" precision="I64">
<dim>1</dim>
</port>
<port id="3" precision="I64">
<dim>1</dim>
</port>
<port id="4" precision="I64">
<dim>1</dim>
</port>
</input>
<output>
<port id="5" precision="FP32" names="315">
<dim>1</dim>
<dim>768</dim>
<dim>49</dim>
</port>
</output>
</layer>
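For reference, this Slice takes a 1x768x50 tensor and produces 1x768x49, i.e. it trims one element from the last axis. A numpy sketch of the equivalent computation follows; the start/stop/step/axes constants live in the .bin file rather than the XML, so the exact values used here are assumptions:

```python
import numpy as np

# Input matching the layer's FP32 [1, 768, 50] input port
x = np.random.rand(1, 768, 50).astype(np.float32)

# opset8 Slice with assumed start=0, stop=49, step=1, axes=[2];
# the real constants are stored in the .bin file, not shown in the XML
y = x[:, :, 0:49]

print(y.shape)  # (1, 768, 49), matching the output port
```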
With --use_legacy_frontend the model compiles, but there is an accuracy problem.
ONNX model: https://drive.google.com/file/d/10xXdXNk_AD9_X8BPSjk-P-pgRJzZl3Oi/view?usp=sharing
Test data: https://drive.google.com/file/d/1joc6OdO2uFWbduHLawuBsGc2NS3x4TAt/view?usp=sharing
Test script (indentation restored so it runs as-is):
import numpy as np
from openvino.runtime import Core, Tensor

# Read the reference audio samples (one float per line)
with open('stop.txt', 'rt') as f:
    values = f.read().strip().split('\n')
values = np.array([float(v) for v in values])

core = Core()
model = core.read_model('model.xml')
compiled_model = core.compile_model(model, 'CPU')
ireq = compiled_model.create_infer_request()

# Shape the samples as a [1, N] FP32 tensor and run inference
values = Tensor(values.reshape(1, -1).astype(np.float32))
ireq.set_input_tensor(values)
ireq.infer()

out = ireq.get_output_tensor()
for i, v in enumerate(out.data.reshape(-1)):
    print(i, v)
Output:
x86 CPU (correct):
0 -0.18749128
1 -0.8572464
2 0.6001833
3 -1.0074832
4 0.5908476
5 -1.4191537
6 -1.1693075
7 0.59931487
8 5.8151817
9 0.35858256
10 0.11403769
11 -1.5154142
ARM64 CPU (wrong):
0 -0.026862225
1 -0.03582457
2 -0.05018249
3 0.008348335
4 -0.0019003607
5 0.018424992
6 -0.020563617
7 -0.07890658
8 -0.027402868
9 0.0029288789
10 -0.15934415
11 0.18829146
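A quick way to quantify the divergence between the two runs, using the 12 logits copied from the outputs above:

```python
import numpy as np

# First 12 logits from the x86 run (reference) and the ARM64 run
x86 = np.array([-0.18749128, -0.8572464, 0.6001833, -1.0074832,
                0.5908476, -1.4191537, -1.1693075, 0.59931487,
                5.8151817, 0.35858256, 0.11403769, -1.5154142])
arm = np.array([-0.026862225, -0.03582457, -0.05018249, 0.008348335,
                -0.0019003607, 0.018424992, -0.020563617, -0.07890658,
                -0.027402868, 0.0029288789, -0.15934415, 0.18829146])

print(np.abs(x86 - arm).max())       # large gap: the outputs diverge badly
print(x86.argmax(), arm.argmax())    # even the top-scoring class differs
```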
Hi @dkurt, thank you for your feedback!
It seems we could support the Slice operation natively, since it is present in ACL: https://arm-software.github.io/ComputeLibrary/latest/classarm__compute_1_1_n_e_slice.xhtml
We plan to support Slice.
With PR #470, the model converted with --use_legacy_frontend produces on arm64:
-0.187376
-0.857422
0.599576
-1.00753
0.591569
-1.41945
-1.16889
0.599556
5.81516
0.357172
0.113763
-1.51495
With #473 we get the following result for the mentioned test data on aarch64:
-0.187376 -0.857422 0.599577 -1.00753 0.591569 -1.41945 -1.16889 0.599557 5.81516 0.357172 0.113763 -1.51495
These results match the results we get on ARM when using --use_legacy_frontend.
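As a sanity check, the aarch64 output after the fix agrees with the x86 reference to within a couple of 1e-3 (a tolerance chosen here by eye; the small residual is expected FP32 noise):

```python
import numpy as np

# x86 reference logits vs. aarch64 logits after the fix
x86 = np.array([-0.18749128, -0.8572464, 0.6001833, -1.0074832,
                0.5908476, -1.4191537, -1.1693075, 0.59931487,
                5.8151817, 0.35858256, 0.11403769, -1.5154142])
fixed = np.array([-0.187376, -0.857422, 0.599577, -1.00753,
                  0.591569, -1.41945, -1.16889, 0.599557,
                  5.81516, 0.357172, 0.113763, -1.51495])

print(np.abs(x86 - fixed).max())            # on the order of 1e-3
print(np.allclose(x86, fixed, atol=2e-3))   # True
```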