alpr-unconstrained
how do we try it on video?
Hi,
On http://sergiomsilva.com/pubs/alpr-unconstrained/ you show results on video. Could you share your experience with us? Did you use a single framework, as you suggest in issue #32?
Thank you.
I successfully ran it on video using two different frameworks (Darknet and Keras). Just extract the model-loading code to the outermost scope, so each model is loaded once before the frame loop, and it works.
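In practice, "extract the model loading to the outermost" means loading every network exactly once at startup and then reusing those objects for each frame. A minimal sketch, assuming the in-tree Darknet Python wrapper (darknet/python/darknet.py), the repo's src/keras_utils.py, and the usual data/ layout; the exact cfg/weights file names depend on what you downloaded, so treat the paths as placeholders:

```python
# Sketch only: load every network once, before any frames are read.
# Paths follow the repo's usual data/ layout; adjust to your checkout.
import darknet.python.darknet as dn       # in-tree Darknet wrapper
from src.keras_utils import load_model    # WPOD-NET loader used by the LP script

# Vehicle detector (Darknet / YOLO)
vehicle_net  = dn.load_net('data/vehicle-detector/yolo-voc.cfg',
                           'data/vehicle-detector/yolo-voc.weights', 0)
vehicle_meta = dn.load_meta('data/vehicle-detector/voc.data')

# License-plate detector (Keras / WPOD-NET)
wpod_net = load_model('data/lp-detector/wpod-net_update1.h5')

# OCR network (Darknet)
ocr_net  = dn.load_net('data/ocr/ocr-net.cfg', 'data/ocr/ocr-net.weights', 0)
ocr_meta = dn.load_meta('data/ocr/ocr-net.data')

# Everything below this point (the frame loop) just reuses these objects.
```

With the models held in memory, each frame only pays for inference, which is what makes video rates reachable.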
Hi, could you please explain in more detail? I didn't understand your answer.
I agree with @Faranio. Can you give a little more detail, @Programmerwyl? Thank you.
1. Load all the models in the data directory.
2. Read the video in a loop.
3. For each frame of the video, run vehicle_detection.py, license_plate_detection.py, and license_plate_ocr.py one by one.

Note: the detect function of Darknet only accepts file paths, not the image itself (see the sketch below for one way around this).
Reference: https://github.com/pjreddie/darknet/issues/289#issuecomment-342448358
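Because Darknet's detect() only accepts a file path, a common workaround is to write each frame to a temporary image before calling it. Here is a rough sketch of the per-frame loop; the video path, temp path, and the omitted crop/LP/OCR steps are placeholders rather than code from this repo, and the model paths are assumptions as in the sketch above:

```python
# Sketch of the per-frame loop. The temp-file write is the workaround for
# Darknet's detect() accepting only file paths, not in-memory images.
import cv2
import darknet.python.darknet as dn

vehicle_net  = dn.load_net('data/vehicle-detector/yolo-voc.cfg',
                           'data/vehicle-detector/yolo-voc.weights', 0)
vehicle_meta = dn.load_meta('data/vehicle-detector/voc.data')

cap = cv2.VideoCapture('my_video.mp4')   # placeholder input video
tmp_path = '/tmp/current_frame.png'

while True:
    ret, frame = cap.read()
    if not ret:
        break

    cv2.imwrite(tmp_path, frame)          # Darknet wants a path, not an array
    vehicle_dets = dn.detect(vehicle_net, vehicle_meta, tmp_path, thresh=0.5)

    for label, confidence, (cx, cy, w, h) in vehicle_dets:
        # Crop the vehicle out of `frame`, then run the license-plate
        # detector and the OCR network on the crop, exactly as the three
        # single-image scripts do (those calls are omitted here).
        pass

cap.release()
```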
Can you please explain the entire procedure for video to me? I am unable to get it working.
1. Load all the models in the data directory.
2. Read the video in a loop.
3. For each frame of the video, run vehicle_detection.py, license_plate_detection.py, and license_plate_ocr.py one by one.

Note: the detect function of Darknet only accepts file paths, not the image itself.
Reference: pjreddie/darknet#289 (comment)
Typically, the input video frame rate (fps) from a camera will not match the ALPR processing time (vehicle detection + license plate detection + OCR). So I guess we should drop a few frames of video and just move on with the latest frame, which means video capturing and ALPR processing should run in two separate threads. Any thoughts?
I think I have found a good way to handle video input. Using Adrian Rosebrock's WebcamVideoStream (https://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/) I got good results. Essentially, one thread captures video frames as fast as they arrive. I made another while loop in which I do all the ALPR tasks, and that runs at a different rate. Every new iteration of that loop reads the latest frame available.
Can you share the video inference code?
I cannot share the entire code, but here is the relevant part:
print("[INFO] sampling THREADED frames from webcam...") vs = WebcamVideoStream(src="rtmp://myServerIpAddress:1935/vod/pkTest.mp4").start() frame = vs.read() # Dummy read to satisfy the next reference mask = np.zeros(frame.shape, np.uint8) while True: try: frame = vs.read()