Unable to get working OpenVINO GPU acceleration on Ubuntu 24.04, i5-1240p
Since https://github.com/openvinotoolkit/openvino/issues/24797 was closed, I am opening a new one.
I have a problem: some models run on the GPU (like the hello_classification sample), but some do not. When I hardcode core.compile_model with GPU, the following error occurs:
...
compiled_model = core.compile_model(model, 'GPU')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/myproject/.venv/lib/python3.12/site-packages/openvino/runtime/ie_api.py", line 543, in compile_model
super().compile_model(model, device_name, {} if config is None else config),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Exception from src/inference/src/cpp/core.cpp:104:
Exception from src/inference/src/dev/plugin.cpp:53:
Check 'false' failed at src/plugins/intel_gpu/src/plugin/program_builder.cpp:185:
[GPU] ProgramBuilder build failed!
Program build failed(0_part_15):
5:13882:2: error: use of undeclared identifier 'eltwise0_data0'
FUSED_OPS_VEC;
^
5:12712:2: note: expanded from macro 'FUSED_OPS_VEC'
....
5:13893:2: error: use of undeclared identifier 'eltwise0_data0'
FUSED_OPS_SCALAR;
^
5:12800:2: note: expanded from macro 'FUSED_OPS_SCALAR'
OpenVINO version: 2024.3.0
Originally posted by @0312birdzhang in https://github.com/openvinotoolkit/openvino/issues/24797#issuecomment-2308893859
Hello, if possible, could you share the model so that we can reproduce the issue from our end?
Yeah, sure. It's openpilot's supercombox.onnx. I wrote a little demo you can test.
@isanghao can you take a look at it, or do you have any suggestions?
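A demo along those lines can be sketched as follows. This is a hedged reconstruction, not the reporter's actual script: the model filename is taken from the thread, the input is omitted, and the device-selection helper is an addition for illustration.

```python
# Minimal compile-on-GPU sketch (assumes OpenVINO 2024.x is installed;
# "supercombox.onnx" is the model named in the thread).

def pick_device(available, preferred=("GPU", "CPU")):
    """Return the first preferred device that appears in the available list.

    OpenVINO reports devices like "CPU", "GPU" or "GPU.0", so we match prefixes.
    """
    for dev in preferred:
        if any(a.startswith(dev) for a in available):
            return dev
    raise RuntimeError("no suitable device found")

def main():
    # Import inside main so the helper above stays usable without OpenVINO.
    import openvino as ov
    core = ov.Core()
    model = core.read_model("supercombox.onnx")
    device = pick_device(core.available_devices)
    # Before the fix, compiling this model on GPU raised the
    # "[GPU] ProgramBuilder build failed!" error shown above.
    compiled = core.compile_model(model, device)
    print("compiled on", device)

if __name__ == "__main__":
    main()
```

Hardcoding `"GPU"` instead of using `pick_device` reproduces the failure directly.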
Hi 0312birdzhang,
The issue was reproduced on our side. I guess this is happening because of the self-multiplication operation after convolution. If you can modify the model, could you replace it with a single pow (square) operation? That would be the immediate workaround; I expect a proper fix will take some more time.
I'm a newbie at this. The model is from comma.ai, and they only open-source the ONNX file. If you mean I should modify the model myself, can you give me some more pointers?
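For illustration, the suggested workaround (rewriting a self-multiplication x*x into a single Pow(x, 2)) looks roughly like this. This is a stdlib-only sketch over plain node dicts standing in for ONNX nodes; on the real model you would run the same traversal over `model.graph.node` with the onnx package, and "const_two" would be a constant initializer you add to the graph.

```python
# Sketch: rewrite Mul(x, x) nodes as Pow(x, 2) in a toy graph representation.
# Nodes are dicts {"op": ..., "inputs": [...], "outputs": [...]} -- a stand-in
# for onnx NodeProto, used only to show the rewrite logic.

def square_via_pow(nodes):
    rewritten = []
    for node in nodes:
        ins = node["inputs"]
        if node["op"] == "Mul" and len(ins) == 2 and ins[0] == ins[1]:
            # x * x  ->  Pow(x, 2); "const_two" is a hypothetical
            # constant-2 initializer added elsewhere in the graph.
            rewritten.append({"op": "Pow",
                              "inputs": [ins[0], "const_two"],
                              "outputs": node["outputs"]})
        else:
            rewritten.append(node)  # everything else is left untouched
    return rewritten
```

Only Mul nodes whose two inputs are the same tensor are rewritten; ordinary multiplications pass through unchanged.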
Hi @0312birdzhang, I created a PR to fix the issue: https://github.com/openvinotoolkit/openvino/pull/26381 You can build OpenVINO with the PR, or wait for it to be merged and use the nightly build.
Could you help us to validate the fix?
Yes, sure, I'm building it now.
Seems like it's working!
100 times infer_request with CPU:
real 0m2.092s
user 0m14.716s
sys 0m3.144s
100 times infer_request with GPU:
real 0m1.865s
user 0m2.066s
sys 0m2.051s
One other thing: an error log is printed: [ERROR] 17:09:48.657 [NPUBackends] Cannot find backend for inference. Make sure the device is available. If I change core.compile_model() to AUTO, it is printed twice.
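One way to keep AUTO from probing an unavailable plugin such as NPU (an assumption on my part, based on AUTO's documented device-candidate syntax, e.g. "AUTO:GPU,CPU") is to restrict the candidate list explicitly:

```python
def auto_device(candidates=("GPU", "CPU")):
    """Build an AUTO device string restricted to the given candidates,
    so plugins not in the list (e.g. NPU) are not probed."""
    return "AUTO:" + ",".join(candidates)

# Hypothetical usage with an OpenVINO Core:
#   compiled = core.compile_model(model, auto_device())   # "AUTO:GPU,CPU"
```

Whether this suppresses the NPUBackends log line specifically would need to be verified; it only illustrates the device-string mechanism.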
Thanks for the confirmation! For the error log, could you file a separate ticket for tracking purposes?
As for this ticket itself, is it OK to close it?
Yeah, thank you very much ❤️