[GPU] Enable custom op with dynamic shape
Details:
- Wrap the gws and lws calculation and move it to custom_gpu_primitive.hpp.
- Pass the OP (customer-provided) to cldnn::custom_gpu_primitive; we need to call new_op->validate_and_infer_types() to infer the real output shape.
- WIP: test case.
Tickets:
- CVS-163880
By the way, the current version implements dynamism through kernel recompilation for each new dynamic shape configuration. However, we could support a shape_agnostic kernel version that can be compiled once and reused with any shape. That said, this implementation looks like a good step toward supporting dynamic custom operations
Hi @sshlyapn, I get your idea: you want to compile the kernel only once for dynamic shapes, and the customer should make sure their kernel is shape-agnostic. I don't have a good solution for this yet. Two options:
1. Merge this solution as the first step (I still need to add a test case).
2. Add an item to the kernel description XML (for example: https://docs.openvino.ai/2025/documentation/openvino-extensibility/custom-gpu-operations.html#example-configuration-file) so the customer can tell us their kernel is shape-agnostic, and we compile it only once (gws and lws would probably need to be constant).
Because solution 2 requires updating the public interface and the wiki, I'd like to merge the current solution as the first step and enable shape-agnostic kernels in the next step. What do you think? @sshlyapn
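For solution 2, the kernel description could carry an explicit flag. A hedged sketch of what that might look like, based on the configuration-file format in the linked docs (the `shape_agnostic` attribute does not exist today; it is the proposed addition):

```xml
<!-- Hypothetical: shape_agnostic="1" tells the plugin the kernel can be
     compiled once and reused for any input shape. -->
<CustomLayer name="MyOp" type="SimpleGPU" version="1" shape_agnostic="1">
    <Kernel entry="my_op_kernel">
        <Source filename="my_op_kernel.cl"/>
    </Kernel>
    <Buffers>
        <Tensor arg-index="0" type="input"  port-index="0" format="BFYX"/>
        <Tensor arg-index="1" type="output" port-index="0" format="BFYX"/>
    </Buffers>
    <!-- gws/lws would have to be constant or resolved at enqueue time
         rather than baked into the compiled kernel -->
    <WorkSizes global="B*F*Y*X" local="1"/>
</CustomLayer>
```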
@peterchen-intel Added a test case. Removed draft status; ready for review.
Hi @sshlyapn, @peterchen-intel, I just registered the dynamic_shape kernel for custom_op, and I find it now works as shape-agnostic. Please help review again.
@xipingyan , can you please check CI test failures?
ov_gpu_func_tests-0 INFO: FAILED TESTS (1/39269):
ov_gpu_func_tests-0 INFO: 2909 ms: ov_gpu_func_tests smoke_CustomOpDynamic.Accuracy
The test passes locally; the CI log shows:
Error loading custom layer configuration file: /home/jenkins/agent/workspace/private-ci/ie/build-linux-ubuntu22/b/repos/openvino/src/plugins/intel_gpu/tests/functional/custom_op/custom_op_dynamic.xml, File was not found at offset 0
The path to the XML file seems correct. I'll need to double-check that.