TensorFlow2.0-Examples
Yolov3 slow?
With video_demo.py I get about 20% of the speed of your 1.0 repo. But thanks so much for sharing!
Please install tensorflow-gpu !!!
Maybe it could be faster if you use a frozen graph (".pb"). I am not very sure about it. I will continuously update this repo, welcome to watch it!
In utils.load_weights() I got ValueError: No such layer: batch_normalization_v2 with 2.0.0-beta1. Without the _v2 suffix it works fine.
Thank you. I fixed it just now.
pred_bbox = model.predict(image_data)
is much faster, though not as fast as your tf1 repo.
model(x) vs. model.predict(x): when calling model(x) directly, we execute the graph in eager mode. For model.predict, TF actually compiles the graph on the first run and then executes in graph mode. So if you are only running the model once, model(x) is faster since no compilation is needed. Otherwise, model.predict or an exported SavedModel graph is much faster (by ~2x).
from this
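The difference is easy to see on a toy network. This is a minimal sketch with a hypothetical two-layer model, not the YOLOv3 graph; both call styles compute the same forward pass but return different types (a tf.Tensor vs. a NumPy array):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in model (hypothetical, not the YOLOv3 network).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(4),
])

x = np.random.rand(1, 8).astype(np.float32)

eager_out = model(x)          # eager call: returns a tf.Tensor, no compilation
graph_out = model.predict(x)  # compiles a graph on first call: returns a np.ndarray
```

For a video loop that calls the model thousands of times, the one-time compilation cost of model.predict (or tf.function) is quickly amortized.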
Thanks a lot for your valuable information
This gives a bit of a speed-up: very roughly ~20 fps to ~30 fps on a 1080 Ti.
```python
feature_maps = YOLOv3(input_layer)

@tf.function
def build(feature_maps):
    bbox_tensors = []
    for i, fm in enumerate(feature_maps):
        bbox_tensor = decode(fm, i)
        bbox_tensors.append(tf.reshape(bbox_tensor, (-1, 5 + num_classes)))
    bbox_tensors = tf.concat(bbox_tensors, axis=0)
    return bbox_tensors

bbox_tensors = build(feature_maps)
model = tf.keras.Model(input_layer, bbox_tensors)
```
Think I will come back to this speed issue when the (non-beta) v2.0 is released.
BTW, I found a small optimization in postprocess_boxes(): we can filter with score_mask first, which significantly reduces the number of rows processed in the steps that follow. Perhaps a couple of fps gain! :)
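The idea can be sketched in NumPy. This is a hypothetical filter_by_score helper illustrating the early score_mask filter, not the repo's actual postprocess_boxes(); it assumes each prediction row is laid out as [x, y, w, h, objectness, class probabilities...]:

```python
import numpy as np

def filter_by_score(pred_bbox, score_threshold=0.3):
    """Hypothetical sketch of filtering with score_mask first.

    Assumes each row of pred_bbox is [x, y, w, h, conf, class_prob_0, ...].
    """
    pred_bbox = np.asarray(pred_bbox)
    conf = pred_bbox[:, 4]
    class_probs = pred_bbox[:, 5:]
    # Best score per row: objectness * max class probability.
    scores = conf * class_probs.max(axis=-1)
    score_mask = scores > score_threshold
    # Dropping low-score rows up front shrinks every later step
    # (coordinate transforms, clipping, NMS) to the few survivors.
    return pred_bbox[score_mask], scores[score_mask]

# Example: of two candidate rows, only the high-score one survives.
pred = np.array([
    [0.0, 0.0, 10.0, 10.0, 0.9, 0.8, 0.2],
    [0.0, 0.0, 10.0, 10.0, 0.1, 0.5, 0.5],
])
kept, kept_scores = filter_by_score(pred, score_threshold=0.3)
```

Since most of the ~10k raw predictions per frame fall below the threshold, everything downstream touches only a handful of rows.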
For some reason predict_on_batch(image) is much faster (almost twice as fast). I tried predict(image, batch_size=1), but it is still slow. With this and the tf.function above, I think the speed is now on par with that of your tf1 repo. Congrats & thanks!
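A plausible reason: predict() wraps every call in Keras's data-batching machinery, while predict_on_batch() runs the single batch directly. A minimal sketch on a hypothetical toy model (not the YOLOv3 network); both calls should produce the same output:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in model (hypothetical, not the YOLOv3 network).
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

image = np.random.rand(1, 8).astype(np.float32)

# predict() spins up its per-call batching pipeline even for one frame;
# predict_on_batch() skips that overhead and evaluates the batch as-is.
out_a = model.predict(image, batch_size=1)
out_b = model.predict_on_batch(image)
```

For a video loop feeding one frame at a time, that per-call overhead dominates, which would explain the near-2x difference.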
I had a problem with tf.function on the official tf2.0-beta-gpu build, but with my own custom 2.0 build (I don't know which source commit I used) it works fine. I think it's correct usage and will be okay when the release version comes out.