
tf2onnx.convert.from_graph_def using all the available gpu memory

Open Orion34-lanbo opened this issue 4 years ago • 1 comments

When calling the tf2onnx.convert.from_graph_def API in a Python process that uses tensorflow_gpu, from_graph_def allocates all of the GPU memory, which makes things difficult for anyone else sharing the GPU card with me. I noticed that the following code tries to place the tf session on the CPU device, but it does not seem to work as expected.

    with tf.device("/cpu:0"):
        with tf.Graph().as_default() as tf_graph:
            with tf_loader.tf_session(graph=tf_graph) as sess:
                tf.import_graph_def(graph_def, name='')
                frozen_graph = tf_loader.freeze_session(sess, input_names=input_names, output_names=output_names)
                input_names = tf_loader.inputs_without_resource(sess, input_names)
                frozen_graph = tf_loader.tf_optimize(input_names, output_names, graph_def)

I tried adding a tf.compat.v1.ConfigProto() with allow_growth=True, and that seems to work. Do you have plans to add a session config setting when creating the tf.Session, or did I use the API incorrectly?
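In the meantime, one workaround (my own suggestion, not part of tf2onnx) is to hide the GPU from the CUDA runtime before TensorFlow is imported, so the conversion process cannot reserve GPU memory at all. The `CUDA_VISIBLE_DEVICES` environment variable is honored by the CUDA runtime, so setting it to `-1` makes the process CPU-only:

```python
import os

# Hide every GPU from the CUDA runtime *before* TensorFlow is imported;
# the conversion then runs purely on CPU and cannot touch GPU memory.
# Note: this must happen before the first `import tensorflow`.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# Any TensorFlow import after this point will report no visible GPUs,
# so tf2onnx.convert.from_graph_def leaves the shared card untouched.
```

The downside is that the whole process loses GPU access, so this only fits if the conversion runs in its own process. The allow_growth=True route keeps GPU access but only defers the allocation, as noted below.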

Orion34-lanbo avatar Nov 11 '21 02:11 Orion34-lanbo

Hm, this is odd - tf.device should do the trick. Let me test this a little. allow_growth=True might not be the right option, because a really large model that already sits in GPU memory still would not fit.

guschmue avatar Nov 11 '21 16:11 guschmue