Training time per batch gets longer after some steps
Hi,
I found that the training time of each step keeps getting slower during the training phase. It might be because new operations are being added to the graph after each sess.run().
I am thinking of using some commands to freeze the graph, like:
tf.reset_default_graph()
tf.get_default_graph().finalize()
But my question is: since the network structure changes after the controller searches for a new architecture, would the commands above cause a problem?
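
For reference, this is roughly the pattern I have in mind (a minimal sketch with a dummy model, not the actual enas training loop):

```python
import tensorflow as tf  # assumes TF 1.x graph mode

tf.reset_default_graph()

# Build a small dummy model once, before training starts.
x = tf.placeholder(tf.float32, shape=[None, 10])
w = tf.Variable(tf.random_normal([10, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
init_op = tf.global_variables_initializer()

# Freeze the graph once everything is built; any later tf.* call that tries
# to add an op will raise a RuntimeError, which makes the leak easy to find.
tf.get_default_graph().finalize()

with tf.Session() as sess:
    sess.run(init_op)
    for step in range(100):
        batch = [[0.0] * 10]  # dummy batch just for illustration
        sess.run(train_op, feed_dict={x: batch})
        # e.g. calling tf.reduce_mean(loss) here would now raise an error
        # instead of silently adding a new op (and slowing down) every step.
```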