TensorFlow-Object-Detection-on-the-Raspberry-Pi
Images/Pictures as Source
Great tutorial so far. My only mistake was not reading to the end first. Instead of using a camera as the source, I want to provide existing images as input and get back a copy with rectangles drawn wherever recognition succeeded.
Is this possible with the given code? As far as I understand, the basic functionality should be the same. With a camera as input, is there simply a wrapper that extracts single frames and calls a detection function, or is the video stream itself processed as input?
I modified it quickly and didn't test it, so fix any syntax errors and give it a try.
Find the section where it says "## MODIFICATION IS HERE" and go from there.
https://github.com/elektronika-ba/TensorFlow-Object-Detection-on-the-Raspberry-Pi/blob/master/Object_detection_picamera.py
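For reference, here is a minimal sketch of the idea: replace the Picamera frame loop with a single `cv2.imread`/`cv2.imwrite` pair and keep the rest of the detection code unchanged. It assumes the same TensorFlow 1.x frozen-graph setup and `object_detection.utils` helpers as the original Object_detection_picamera.py; the file paths (`PATH_TO_CKPT`, `PATH_TO_LABELS`, `INPUT_IMAGE`, `OUTPUT_IMAGE`) are placeholders, not values from the linked script.

```python
# Sketch: run the detector on a single image file instead of the Picamera stream.
# Assumes TensorFlow 1.x and the tensorflow/models object_detection utilities,
# as used in the original tutorial. All paths below are placeholders.
import cv2
import numpy as np
import tensorflow as tf
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util

PATH_TO_CKPT = 'frozen_inference_graph.pb'   # placeholder: frozen detection graph
PATH_TO_LABELS = 'mscoco_label_map.pbtxt'    # placeholder: label map
NUM_CLASSES = 90
INPUT_IMAGE = 'input.jpg'                    # placeholder: picture to analyze
OUTPUT_IMAGE = 'output.jpg'                  # copy with rectangles drawn on it

# Build the category index used to label the boxes.
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(
    label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

# Load the frozen TensorFlow graph into memory and start a session.
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        od_graph_def.ParseFromString(fid.read())
        tf.import_graph_def(od_graph_def, name='')
    sess = tf.Session(graph=detection_graph)

# Input and output tensors of the detection graph.
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')

# Read the picture from disk instead of grabbing a frame from the camera.
frame = cv2.imread(INPUT_IMAGE)
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
frame_expanded = np.expand_dims(frame_rgb, axis=0)

# Run the detector once on the single image.
(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: frame_expanded})

# Draw the detection rectangles and labels onto the original image.
vis_util.visualize_boxes_and_labels_on_image_array(
    frame,
    np.squeeze(boxes),
    np.squeeze(classes).astype(np.int32),
    np.squeeze(scores),
    category_index,
    use_normalized_coordinates=True,
    line_thickness=8,
    min_score_thresh=0.40)

# Save the annotated copy.
cv2.imwrite(OUTPUT_IMAGE, frame)
```

To process a folder of pictures, the same load/detect/save block can simply be wrapped in a loop over the filenames; the graph and session only need to be created once.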