
[PT FE] aten::dot operation

Open LucaTamSapienza opened this issue 1 year ago • 5 comments

Details:

  • added aten::dot operation for pytorch models

Tickets:

  • Closes #22074

LucaTamSapienza avatar Feb 21 '24 00:02 LucaTamSapienza

Hi @mvafin

I think Multiply+ReduceSum can be slower than a single MatMul, so I would recommend using MatMul, but you can run a performance benchmark to confirm which is faster.

Yes, you're right. As I read in @eaidova's comment, I should have used v0::MatMul instead of Multiply and ReduceSum. I can certainly try to see which of the two is faster.
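For context, the two candidate lowerings compute the same value for 1-D vectors; a quick numpy sketch (illustrative only, not the PR code):

```python
import numpy as np

# Toy 1-D inputs mirroring a dot product
x = np.arange(5, dtype=np.float32)      # [0, 1, 2, 3, 4]
y = np.arange(5, 10, dtype=np.float32)  # [5, 6, 7, 8, 9]

# Lowering 1: MatMul (1-D @ 1-D is an inner product)
matmul_result = x @ y

# Lowering 2: Multiply followed by ReduceSum
reduce_result = np.sum(x * y)

assert np.isclose(matmul_result, reduce_result)
print(float(matmul_result))  # 80.0
```

The single-node MatMul form also gives the runtime one fused kernel to optimize instead of two elementwise ops.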

Also I think that you can just use conversion helper like this:

https://github.com/openvinotoolkit/openvino/blob/fa43c7b364b6899a90da2b3a27d77461235a5071/src/frontends/pytorch/src/op_table.cpp#L499

You mean instead of {"aten::dot", op::translate_dot}?

LucaTamSapienza avatar Feb 21 '24 12:02 LucaTamSapienza

You mean instead of {"aten::dot", op::translate_dot}?

I mean {"aten::dot", op::translate_1to1_match_2_inputs<opset10::MatMul>}, so you do not need to create translate_dot; there is a generic helper that can do the same.

mvafin avatar Feb 21 '24 13:02 mvafin

I mean {"aten::dot", op::translate_1to1_match_2_inputs<opset10::MatMul>}, so you do not need to create translate_dot; there is a generic helper that can do the same.

Okay, I've done it. I don't quite understand how this works, though: since I use the generic helper with the MatMul class, does that let me avoid writing translate_dot, given that my dot.cpp implementation used MatMul anyway?
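As a rough Python analogy (all names hypothetical, not OpenVINO's actual C++ API), the generic helper just forwards the node's two inputs to a single node constructor, which is why a dedicated translate_dot becomes unnecessary:

```python
# Hypothetical sketch of a generic 1-to-1 translator: it takes the
# op's two inputs and feeds them to one node constructor, so no
# dedicated per-op translator function is needed.

def make_1to1_match_2_inputs(node_ctor):
    def translate(inputs):
        a, b = inputs            # exactly two inputs expected
        return node_ctor(a, b)   # e.g. MatMul(a, b)
    return translate

# A toy "MatMul" standing in for opset10::MatMul on 1-D inputs
def matmul(a, b):
    return sum(x * y for x, y in zip(a, b))

OP_TABLE = {"aten::dot": make_1to1_match_2_inputs(matmul)}

print(OP_TABLE["aten::dot"]([[1, 2, 3], [4, 5, 6]]))  # 32
```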

LucaTamSapienza avatar Feb 21 '24 16:02 LucaTamSapienza

@mvafin I'm not sure that the _prepare_input function works correctly. Could you please take a look?

LucaTamSapienza avatar Feb 23 '24 15:02 LucaTamSapienza

@mvafin I finally solved my problem with running the tests locally. The failures come from the mixed types I am testing (e.g., float32 with int64). Despite using the align_eltwise_input_types function, I still get errors. If I instead use two identical types in the tests, there are no errors. I believe this is because torch.dot doesn't support mixed types:

>>> import torch
>>> x = torch.tensor([1.0, 0.0])
>>> y = torch.tensor([1, 2])
>>> torch.dot(x, y)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: dot : expected both vectors to have same dtype, but found Float and Long
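In other words, torch.dot rejects mixed dtypes before any lowering happens, so the frontend never receives a mixed-dtype aten::dot to translate. A minimal plain-Python sketch of that guard (hypothetical code mimicking torch's error message, not torch internals):

```python
def dot(x, y, x_dtype, y_dtype):
    # torch.dot raises on mixed dtypes up front, so a frontend
    # translating aten::dot only ever sees matching dtypes.
    if x_dtype != y_dtype:
        raise RuntimeError(
            f"dot : expected both vectors to have same dtype, "
            f"but found {x_dtype} and {y_dtype}"
        )
    return sum(a * b for a, b in zip(x, y))

print(dot([1.0, 0.0], [1.0, 2.0], "Float", "Float"))  # 1.0
```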

LucaTamSapienza avatar Feb 23 '24 18:02 LucaTamSapienza

As you can see from the test run, if I test the operation on two tensors with the same data type, the tests pass.

tests/layer_tests/pytorch_tests/test_dot.py::TestDot::test_dot[ ie_device:CPU - precision:FP32 - inputs:([0, 1, 2, 3, 4], [5, 6, 7, 8, 9]) - dtype1:torch.int32 - dtype2:torch.int32 ] PASSED                  [  0%]
tests/layer_tests/pytorch_tests/test_dot.py::TestDot::test_dot[ ie_device:CPU - precision:FP32 - inputs:([1, 2, 3], [4, 5, 6]) - dtype1:torch.int32 - dtype2:torch.int32 ] PASSED                              [  8%]
tests/layer_tests/pytorch_tests/test_dot.py::TestDot::test_dot[ ie_device:CPU - precision:FP32 - inputs:([1, 1, 1], [1, 1, 1]) - dtype1:torch.int32 - dtype2:torch.int32 ] PASSED                              [ 17%]
tests/layer_tests/pytorch_tests/test_dot.py::TestDot::test_dot[ ie_device:CPU - precision:FP16 - inputs:([0, 1, 2, 3, 4], [5, 6, 7, 8, 9]) - dtype1:torch.int32 - dtype2:torch.int32 ] PASSED                  [ 25%]
tests/layer_tests/pytorch_tests/test_dot.py::TestDot::test_dot[ ie_device:CPU - precision:FP16 - inputs:([1, 2, 3], [4, 5, 6]) - dtype1:torch.int32 - dtype2:torch.int32 ] PASSED                              [ 33%]
tests/layer_tests/pytorch_tests/test_dot.py::TestDot::test_dot[ ie_device:CPU - precision:FP16 - inputs:([1, 1, 1], [1, 1, 1]) - dtype1:torch.int32 - dtype2:torch.int32 ] PASSED                              [ 42%]
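The test IDs above suggest a parametrization roughly like the following sketch (parameter names guessed from the IDs; the real layer test compares torch.dot against the converted OpenVINO model rather than a pure-Python reference):

```python
import pytest

@pytest.mark.parametrize("x, y, expected", [
    ([0, 1, 2, 3, 4], [5, 6, 7, 8, 9], 80),
    ([1, 2, 3], [4, 5, 6], 32),
    ([1, 1, 1], [1, 1, 1], 3),
])
def test_dot_same_dtype(x, y, expected):
    # pure-Python reference dot product for same-dtype inputs
    assert sum(a * b for a, b in zip(x, y)) == expected
```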

LucaTamSapienza avatar Feb 25 '24 18:02 LucaTamSapienza

RuntimeError: dot : expected both vectors to have same dtype, but found Float and Long

Yes, this error is printed by torch, so you do not need to support mixed types in this case

mvafin avatar Feb 26 '24 13:02 mvafin

Yes, this error is printed by torch, so you do not need to support mixed types in this case

Oh, ok. I'll go back to the previous implementation and test whether it works.

LucaTamSapienza avatar Feb 26 '24 18:02 LucaTamSapienza

@mvafin All tests pass successfully. Should I also add some tests to support the out parameter?

LucaTamSapienza avatar Feb 27 '24 23:02 LucaTamSapienza

@LucaTamSapienza, yes, please also add tests for out, but since translate_1to1_match_2_inputs doesn't support out, please use your translate_dot function and reuse translate_1to1_match_2_inputs inside it.

mvafin avatar Mar 01 '24 07:03 mvafin

translate_1to1_match_2_inputs

Sure @mvafin, since the function doesn't support out as a parameter, I added a check: I use it only when there are exactly 2 inputs; otherwise, I use the explicit implementation. If I use it without this check, the tests fail when mode = out.
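That dispatch can be sketched in plain Python (hypothetical names; the actual code is C++ in dot.cpp):

```python
def generic_1to1_matmul(a, b):
    # stand-in for translate_1to1_match_2_inputs<opset10::MatMul>
    return sum(p * q for p, q in zip(a, b))

def translate_dot(inputs):
    # With exactly two inputs, reuse the generic helper; the
    # out= variant carries a third input the helper cannot take,
    # so that case is handled explicitly.
    if len(inputs) == 2:
        return generic_1to1_matmul(*inputs)
    x, y, out = inputs
    out[0] = generic_1to1_matmul(x, y)  # emulate writing into `out`
    return out[0]

out = [None]
print(translate_dot([[1, 2], [3, 4]]))       # 11
print(translate_dot([[1, 2], [3, 4], out]))  # 11, and out[0] == 11
```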

LucaTamSapienza avatar Mar 01 '24 20:03 LucaTamSapienza

build_jenkins

mvafin avatar Mar 06 '24 10:03 mvafin

build_jenkins

mvafin avatar Mar 07 '24 15:03 mvafin