onnx-mlir
Adds option to specify graph entry point name
This changes the behavior when a single entry point is specified. Prior to this change, if there was only a single entry point, its name was ignored and `run_main_graph` was emitted. After this change, the entry point name is honored even when there is only one entry point.
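To make the new behavior concrete, here is a minimal caller-side sketch in C. The option name `--entry-point-name` and the graph name `my_graph` are hypothetical, not the PR's actual interface; the point is only that the exported symbol follows the user-specified name instead of being forced to `run_main_graph`.

```c
/* Sketch only: assumes the model was compiled with a hypothetical
 * --entry-point-name=my_graph option, so the shared library exports
 * run_my_graph; previously the symbol was always run_main_graph. */
#include "OnnxMlirRuntime.h"

/* Exported by the compiled model library (e.g. model.so). */
extern OMTensorList *run_my_graph(OMTensorList *inputs);

OMTensorList *infer(OMTensorList *inputs) {
  return run_my_graph(inputs);
}
```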
Can one of the admins verify this patch?
@jenkins-droid test this please
@kernhanda thank you for the patch! Could you please look at why the lit tests failed? You can check locally by running `make check-onnx-lit`. Thanks!
I see a big advantage in specifying a custom entry point name: a user may have two single-entry ONNX models that they want to run in the same Python/C program. In that case, the entry points must be different.
> I could update the behavior such that `run_main_graph` is emitted if the user didn't override the default graph name.
I prefer this. Since we say that the default graph name is `main_graph` when importing an ONNX model, it's clear to users. Also, the Java and C interfaces use `run_main_graph` as the default entry point.
> The other scenario is where an existing MLIR file is being compiled and only one kernel entry point is specified. In this case, the current behavior is to disregard the name of the entry point and always emit `run_main_graph`. TBH, this doesn't make sense to me, but if there are reasons/dependencies that rely upon this behavior, I can make the change more selective.
I am not so convinced that onnx-mlir users will compile an MLIR file as input; that's mainly something onnx-mlir developers do. So it looks fine to use the entry point as-is.
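To illustrate the two-model motivation above, a minimal C sketch, assuming the two models were compiled with hypothetical entry-point names `encoder` and `decoder`. With the old unconditional `run_main_graph`, both shared libraries would export the same symbol and could not be linked into one program.

```c
/* Sketch: two single-entry ONNX models compiled into two shared
 * libraries, each with its own (hypothetical) entry-point name.
 * Without distinct names, both would export run_main_graph and
 * clash at link time. */
#include "OnnxMlirRuntime.h"

extern OMTensorList *run_encoder(OMTensorList *inputs);
extern OMTensorList *run_decoder(OMTensorList *inputs);

OMTensorList *pipeline(OMTensorList *inputs) {
  OMTensorList *hidden = run_encoder(inputs);
  return run_decoder(hidden);
}
```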
> I see a big advantage in specifying a custom entry point name: a user may have two single-entry ONNX models that they want to run in the same Python/C program. In that case, the entry points must be different.
Yes, that's exactly the motivation here.
> I prefer this. Since we say that the default graph name is `main_graph` when importing an ONNX model, it's clear to users. Also, the Java and C interfaces use `run_main_graph` as the default entry point.
With this change, if you import a model and don't specify the graph name, it still defaults to `main_graph`.
> I am not so convinced that onnx-mlir users will compile an MLIR file as input; that's mainly something onnx-mlir developers do. So it looks fine to use the entry point as-is.
Do you think it's necessary that an MLIR file with a single entry point named something other than `main_graph` should still result in an entry point named `run_main_graph`?
^ This is the only open question at the moment, since it's the only difference from existing functionality. Otherwise, if you don't specify the graph name option, the behavior is identical to what it is without this change.
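For completeness, a sketch of the unchanged default path, written against onnx-mlir's documented C runtime API: with no graph name given, the compiled library still exports `run_main_graph`, so existing callers like this one keep working. The shapes and values are illustrative.

```c
/* Sketch of the existing default: with no entry-point name given,
 * the compiled model library still exports run_main_graph. */
#include <stdio.h>
#include "OnnxMlirRuntime.h"

extern OMTensorList *run_main_graph(OMTensorList *inputs);

int main(void) {
  static float data[1 * 4] = {1.0f, 2.0f, 3.0f, 4.0f};
  int64_t shape[] = {1, 4};

  /* Wrap the raw buffer in an OMTensor and a one-element list. */
  OMTensor *input = omTensorCreate(data, shape, 2, ONNX_TYPE_FLOAT);
  OMTensor *inputs[] = {input};
  OMTensorList *inList = omTensorListCreate(inputs, 1);

  /* Call the default entry point and read back the first output. */
  OMTensorList *outList = run_main_graph(inList);
  OMTensor *output = omTensorListGetOmtByIndex(outList, 0);
  float *result = (float *)omTensorGetDataPtr(output);
  printf("first output element: %f\n", result[0]);
  return 0;
}
```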
> Do you think it's necessary that an MLIR file with a single entry point named something other than `main_graph` should still result in an entry point named `run_main_graph`?
If users specify an entry point in an MLIR file via `onnx.EntryPoint`, we should respect that and not change it to `main_graph`. In summary, we only allow users to set an entry point (`main_graph` by default) when importing an ONNX model, which has no concept of an entry point. What do you think?
> Do you think it's necessary that an MLIR file with a single entry point named something other than `main_graph` should still result in an entry point named `run_main_graph`?
>
> If users specify an entry point in an MLIR file via `onnx.EntryPoint`, we should respect that and not change it to `main_graph`. In summary, we only allow users to set an entry point (`main_graph` by default) when importing an ONNX model, which has no concept of an entry point. What do you think?
I agree! This PR matches that functionality then 😄
@sstamenova requested tests that exercise the model import scenario, but I couldn't find any existing tests that do this. Any guidance on how to write such a test, or on whether it's necessary, would be appreciated.
@kernhanda any update on this?
@kernhanda we would like to have this functionality; do you have time to update this PR? If not, I can take it over and create a new PR. Thanks!