
OpenVX-NN & Runtime/MLIR

Open peter197321 opened this issue 4 years ago • 5 comments

Hello,

is there somehow possible to have OpenVX-NN as a backend for tfrt/mlir?

peter197321 avatar Jun 20 '20 11:06 peter197321

It should be possible down the road, but probably premature to tackle right now. A general approach could be:

  1. Wrap the specific backend APIs as TFRT BEF kernels.
  2. For graph execution, write compiler passes (or declarative rewrite patterns) to lower a device-neutral input program (e.g. a TF2 model) to the target kernels, exercising the backend's capabilities. In this case, the compiler rewrite would transform a sub-graph into an OpenVX graph, and also add (BEF) kernels to run that graph.
  3. For eager execution, add an OpHandler for OpenVX, and use it to implement each TFRT (TF) op by calling into the corresponding OpenVX capability.
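As a rough illustration of step 2, the rewrite might look like the following MLIR sketch. This is purely hypothetical: the `openvx` dialect, the `openvx.graph` op, and the `tfrt_openvx.execute` BEF kernel shown here do not exist in TFRT or OpenVX today, and the types are simplified.

```
// Before: a device-neutral sub-graph expressed with TF ops.
func @forward(%x: tensor<1x224x224x3xf32>,
              %w: tensor<3x3x3x8xf32>) -> tensor<1x224x224x8xf32> {
  %0 = "tf.Conv2D"(%x, %w) {strides = [1, 1, 1, 1], padding = "SAME"}
       : (tensor<1x224x224x3xf32>, tensor<3x3x3x8xf32>) -> tensor<1x224x224x8xf32>
  %1 = "tf.Relu"(%0) : (tensor<1x224x224x8xf32>) -> tensor<1x224x224x8xf32>
  return %1 : tensor<1x224x224x8xf32>
}

// After the rewrite: the sub-graph is outlined into an OpenVX graph,
// and a BEF kernel is emitted to run it on the OpenVX backend.
func @forward(%x: tensor<1x224x224x3xf32>,
              %w: tensor<3x3x3x8xf32>) -> tensor<1x224x224x8xf32> {
  %g = "openvx.graph"() {nodes = ["vxConvolutionLayer", "vxActivationLayer"]}
       : () -> !openvx.graph                       // hypothetical graph handle
  %r = "tfrt_openvx.execute"(%g, %x, %w)           // hypothetical BEF kernel
       : (!openvx.graph, tensor<1x224x224x3xf32>, tensor<3x3x3x8xf32>)
       -> tensor<1x224x224x8xf32>
  return %r : tensor<1x224x224x8xf32>
}
```

The key point is that the pattern match happens at the sub-graph level, so a whole fused region (not just individual ops) becomes a single OpenVX graph execution.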

This type of effort will be more feasible after we share the TFRT GPU-related design and code (e.g. for NVIDIA CUDA and other kinds of GPUs). We will provide an update on this topic in Q3.

mhong avatar Jun 20 '20 14:06 mhong

We have an ONNX-MLIR project that we actively work on. If OpenVX-NN has operations that correspond to underlying ONNX operations, then building a set of rules to convert an OpenVX graph to ONNX is pretty easy in MLIR. Something to consider.

AlexandreEichenberger avatar Jun 20 '20 14:06 AlexandreEichenberger

Do you consider ONNX-MLIR to be the proper mapping, quote "building a set of rules to convert an OpenVX to ONNX", or will there rather be a workflow that generates OpenVX inference code?

peter197321 avatar Jun 20 '20 15:06 peter197321

Peter,

Looking at your question, you appear more interested in using your backend for TF. I misread your question, thinking that you wanted to use MLIR to go from OpenVX to the normal MLIR compilation and TFRT. Sorry.

AlexandreEichenberger avatar Jun 22 '20 12:06 AlexandreEichenberger

It can be either way.

peter197321 avatar Jun 22 '20 16:06 peter197321