
prediction time goes up gradually

Open · kuro0ni opened this issue on Sep 24, 2017 · 2 comments

I followed this tutorial to retrain MobileNet, and I am calling the Predict() function in a loop (the timing loop is sketched after the script below). I printed out how long each prediction takes: it starts at around 0.6 seconds and goes up gradually. My PC specs are an i7 4790, a GTX 1070, and 16 GB RAM.

Here's the Python script that uses the model.

import operator

import numpy as np
import tensorflow as tf


def load_graph(model_file):
  graph = tf.Graph()
  graph_def = tf.GraphDef()

  with open(model_file, "rb") as f:
    graph_def.ParseFromString(f.read())
  with graph.as_default():
    tf.import_graph_def(graph_def)

  return graph

def read_tensor_from_image_file(file_name, input_height=299, input_width=299,
                input_mean=0, input_std=255):
  input_name = "file_reader"
  output_name = "normalized"
  file_reader = tf.read_file(file_name, input_name)

  image_reader = tf.image.decode_jpeg(file_reader, channels = 3,
                                        name='jpeg_reader')
  float_caster = tf.cast(image_reader, tf.float32)
  dims_expander = tf.expand_dims(float_caster, 0)
  resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width])
  normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])
  sess = tf.Session()
  result = sess.run(normalized)

  return result

def load_labels(label_file):
  label = []
  proto_as_ascii_lines = tf.gfile.GFile(label_file).readlines()
  for l in proto_as_ascii_lines:
    label.append(l.rstrip())
  return label


model_file = "tactic-model/retrained_graph.pb"
label_file = "tactic-model/retrained_labels.txt"

graph = load_graph(model_file)
labels = load_labels(label_file)

def Predict(file_name):

  input_height = 224
  input_width = 224
  input_mean = 128
  input_std = 128
  input_layer = "input"
  output_layer = "final_result"

  t = read_tensor_from_image_file(file_name,
                                  input_height=input_height,
                                  input_width=input_width,
                                  input_mean=input_mean,
                                  input_std=input_std)

  input_name = "import/" + input_layer
  output_name = "import/" + output_layer
  input_operation = graph.get_operation_by_name(input_name)
  output_operation = graph.get_operation_by_name(output_name)


  with tf.Session(graph=graph) as sess:
    results = sess.run(output_operation.outputs[0],
                      {input_operation.outputs[0]: t})
  results = np.squeeze(results)

  index, value = max(enumerate(results), key=operator.itemgetter(1))

  return labels[index]
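
The timing loop mentioned above looks something like this (a simplified sketch; the paths are placeholders):

import time

# Simplified driver loop: time each call to Predict().
image_files = ["frames/0001.jpg", "frames/0002.jpg", "frames/0003.jpg"]

for path in image_files:
  start = time.time()
  label = Predict(path)
  print("%s -> %s (%.2f s)" % (path, label, time.time() - start))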

kuro0ni · Sep 24 '17

@Zehaos Hi, I get the same issue that @br3ach3r mentioned above. Could you please correct us if we are doing something wrong?

shehan-mark · Sep 25 '17

I'm having this issue too. I think it's related to the graph growing every time the read_tensor_from_image_file function is called. I found this question on StackOverflow; the solution posted there doesn't work for me, but it may help someone else.
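
One quick way to confirm this is to count the ops in the default graph after each call; the count keeps increasing, because every call adds fresh decode/resize ops and a new Session (a rough check, with placeholder paths):

import tensorflow as tf

for path in ["a.jpg", "b.jpg", "c.jpg"]:
  read_tensor_from_image_file(path)
  # The op count grows on every iteration, which is why sess.run gets slower.
  print(len(tf.get_default_graph().get_operations()))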

Edit: I found a solution here which works for me. Wrap the body of the read_tensor_from_image_file() function in a with tf.Graph().as_default(): block, so the code should look something like this:

def read_tensor_from_image_file(file_name, input_height=299, input_width=299,
                input_mean=0, input_std=255):
  with tf.Graph().as_default():
    input_name = "file_reader"
    output_name = "normalized"
    file_reader = tf.read_file(file_name, input_name)

    image_reader = tf.image.decode_jpeg(file_reader, channels = 3,
                                        name='jpeg_reader')
    float_caster = tf.cast(image_reader, tf.float32)
    dims_expander = tf.expand_dims(float_caster, 0)
    resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width])
    normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])
    sess = tf.Session()
    result = sess.run(normalized)

    return result

Hope this helps.
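
Another way to avoid the growth entirely is to build the preprocessing ops once and feed the file name through a placeholder, reusing a single session; a rough sketch under the same TF 1.x assumptions (the make_image_reader helper is hypothetical):

import tensorflow as tf

def make_image_reader(input_height=224, input_width=224,
                      input_mean=128, input_std=128):
  # Build the preprocessing ops once, in their own graph, and reuse them.
  g = tf.Graph()
  with g.as_default():
    file_name_ph = tf.placeholder(tf.string, name="file_name")
    file_reader = tf.read_file(file_name_ph)
    image = tf.image.decode_jpeg(file_reader, channels=3)
    image = tf.cast(image, tf.float32)
    image = tf.expand_dims(image, 0)
    image = tf.image.resize_bilinear(image, [input_height, input_width])
    normalized = tf.divide(tf.subtract(image, [input_mean]), [input_std])
  sess = tf.Session(graph=g)

  def read(file_name):
    # Only a feed and a run per image; no new ops, no new Session.
    return sess.run(normalized, {file_name_ph: file_name})

  return read

# reader = make_image_reader()
# t = reader("some_image.jpg")

The same idea applies to Predict(): keeping one tf.Session(graph=graph) open across calls avoids re-creating the session for every prediction.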

p-ml · Dec 19 '17