
Performance issue with tf.function

Open DLPerf opened this issue 2 years ago • 4 comments

Hello! Our static bug checker has found a performance issue in ONNX/lambda-onnx/onnxruntime/transformers/benchmark.py: run_with_tf_optimizations (1), (2), (3) is called repeatedly in a for loop, but a tf.function-decorated function, run_in_graph_mode, is defined and called inside run_with_tf_optimizations.

Because the decorated function is redefined on each call, run_in_graph_mode creates a new graph every time run_with_tf_optimizations runs in the loop, which can trigger the tf.function retracing warning.

A similar problem exists in ONNX-ARM/lambda-onnx-arm-3.8/onnxruntime/transformers/benchmark.py.

The TensorFlow documentation on tf.function retracing supports this.

Briefly, for better efficiency, it is better to use:

@tf.function
def inner():
    # Decorated once at module level: the traced graph is reused across calls.
    pass

def outer():
    inner()

rather than:

def outer():
    @tf.function
    def inner():
        # Redefined on every call to outer(): a new tf.function (and graph) each time.
        pass
    inner()
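
For illustration, here is a minimal, self-contained sketch (not taken from benchmark.py; the function names are made up) showing the difference. A print statement in a tf.function body runs only while a new graph is being traced, so it reveals how often retracing happens in each pattern:

import tensorflow as tf

@tf.function
def scale_module_level(x):
    print("Tracing scale_module_level")  # runs only while a new graph is traced
    return x * 2

def outer_reuses_graph(x):
    # Calls the module-level tf.function: traced once, reused afterwards.
    return scale_module_level(x)

def outer_retraces(x):
    @tf.function
    def scale(x):
        print("Tracing scale")  # runs on every call: a fresh graph each time
        return x * 2
    return scale(x)

x = tf.constant(1)
for _ in range(3):
    outer_reuses_graph(x)  # prints the tracing message once
for _ in range(3):
    outer_retraces(x)      # prints the tracing message three times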

Looking forward to your reply.

DLPerf — Mar 05 '23 05:03

We are investigating this kind of issue, and your answer would be of great help to our work. Can you take a look? Thank you in advance! @rvaneijk @ryfeus @Con-Mi

DLPerf — Mar 06 '23 02:03

Can you explain why you've tagged me in this issue, please? Beyond fixing a one-character typo in the README, I wouldn't consider myself someone to tag on issues like this. Please don't tag random people.

martinpeck — Mar 06 '23 11:03

OK, sorry.

DLPerf — Mar 06 '23 12:03

Thank you for sharing! It's a library dependency, so this part wasn't written by me.

ryfeus — Mar 07 '23 06:03