python-tf-bodypix
Predict on batch
Is there a way to perform batch prediction to leverage the GPU? I believe this functionality exists in the JS version, but I am not sure how to do it in this Python version.
It wouldn't be difficult to add a predict-on-batch function. Internally (except for TF Lite, I believe), it is using a batch already. I think where it becomes a bit tricky is following it through to the post-processing.
e.g. currently with a single image it looks like this:
```python
from tf_bodypix.api import download_model, load_model, BodyPixModelPaths

bodypix_model = load_model(download_model(BodyPixModelPaths.MOBILENET_FLOAT_50_STRIDE_16))

# image_array: an RGB image as a numpy array or tensor
result = bodypix_model.predict_single(image_array)
# simple mask
mask = result.get_mask(threshold=0.75)
# colored mask (separate colour for each body part)
colored_mask = result.get_colored_part_mask(mask)
```
For a batch it would then be something like:
```python
batch_result = bodypix_model.predict_batch(image_array_batch)
# simple mask
mask_batch = batch_result.get_mask_batch(threshold=0.75)
# colored mask (separate colour for each body part)
colored_mask_batch = batch_result.get_colored_part_mask_batch(mask_batch)
```
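In the meantime, a thin wrapper over the existing public API could mimic that interface; this is only a sketch, the helper names (`predict_image_batch`, `get_mask_batch`, `get_colored_part_mask_batch`) are made up, and because it just loops over `predict_single` it would not give any real GPU batching benefit:

```python
def predict_image_batch(bodypix_model, image_array_batch):
    # hypothetical helper: run the existing single-image prediction for each image
    return [
        bodypix_model.predict_single(image_array)
        for image_array in image_array_batch
    ]


def get_mask_batch(results, threshold=0.75):
    # hypothetical helper: apply the existing per-result post-processing
    return [result.get_mask(threshold=threshold) for result in results]


def get_colored_part_mask_batch(results, mask_batch):
    # hypothetical helper: colored part mask for each (result, mask) pair
    return [
        result.get_colored_part_mask(mask)
        for result, mask in zip(results, mask_batch)
    ]
```

A real `predict_batch` would instead stack the images into a single batched input tensor before running the network, which is exactly where the post-processing would need to be followed through per batch item.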
I also wonder whether there would be any noticeable speed improvement.
What have you observed when using the JS version? And what is your use-case?
Thank you for the reply.
My current use case is measuring BodyPix's performance over a dataset of 10,000+ images, but given the high image resolution (1080p), it's taking ~1.3-1.5 seconds per image on Colab's default CPU. Any improvement in inference time is useful (without having to downsample).
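For context, a minimal timing sketch of that kind of measurement might look like this (the `image_paths` list is a placeholder, not my actual pipeline; the timer covers inference plus mask post-processing):

```python
import time

import tensorflow as tf
from tf_bodypix.api import download_model, load_model, BodyPixModelPaths

# placeholder: list of 1080p image files from the dataset
image_paths = ["/path/to/image-0001.jpg"]

bodypix_model = load_model(download_model(BodyPixModelPaths.MOBILENET_FLOAT_50_STRIDE_16))

total_seconds = 0.0
for image_path in image_paths:
    image = tf.keras.preprocessing.image.load_img(image_path)
    image_array = tf.keras.preprocessing.image.img_to_array(image)
    start = time.perf_counter()
    result = bodypix_model.predict_single(image_array)
    mask = result.get_mask(threshold=0.75)
    total_seconds += time.perf_counter() - start

print('seconds per image: %.3f' % (total_seconds / len(image_paths)))
```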
I have not tried the JS version yet, but it seems that there is "a large performance difference" according to https://github.com/tensorflow/tfjs/issues/2197
Would you be interested in submitting a PR to add batch support?