Can a PyTorch model be saved, loaded, and frozen in TF?
I saw that there is a converter for exporting PyTorch models to ONNX format, and there also seems to be a converter for importing that format into TF. But in the given example for importing an ONNX model into TF, I was confused about what methods are available on the `tf_rep` object assigned in `tf_rep = prepare(model)`. In particular, I want to be able to freeze the model using the graph_def approach, similar to the following code. How should I change this code to use the model object?
```python
with tf.Session(graph=tf.Graph()) as sess:
    # We import the meta graph into the current default graph
    saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=clear_devices)
    # We restore the weights
    saver.restore(sess, input_checkpoint)
    # We use a built-in TF helper to export variables to constants
    output_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,                                   # the session is used to retrieve the weights
        tf.get_default_graph().as_graph_def(),  # the graph_def is used to retrieve the nodes
        output_node_names.split(",")            # the output node names select the useful nodes
    )
    # Finally, we serialize and dump the output graph to the filesystem
    with tf.gfile.GFile(output_graph, "wb") as f:
        f.write(output_graph_def.SerializeToString())
    print("%d ops in the final graph." % len(output_graph_def.node))
```
@hodaraad I have created a pull request for an end-to-end tutorial that shows how to import a PyTorch model into TensorFlow: https://github.com/onnx/tutorials/pull/68 Could you check it out and give me some feedback, please? Thanks.