
Inference on video

Open constantinfite opened this issue 2 years ago • 12 comments

Hi, I would like to know if it is possible to run inference on a video? Thanks

constantinfite avatar Feb 21 '22 11:02 constantinfite

This should be straightforward using the --stream parameter. Pass in --device as your filename and it should work. It's using cv2.VideoCapture(device) to load the stream which should work for either a physical device (like a webcam) or a file on disk.

If not, it should be easy to modify:

https://docs.opencv.org/3.4/dd/d43/tutorial_py_video_display.html

Currently there isn't support for outputting a labelled video file, but that would be fairly straightforward with cv2.VideoWriter in the loop.
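
As a rough sketch (generic OpenCV usage rather than this repo's code; the filenames are placeholders), reading a video file and writing annotated frames back out looks like this:

import cv2

cam = cv2.VideoCapture("short_video.mp4")  # a file path or a device index both work here
fps = cam.get(cv2.CAP_PROP_FPS)
w = int(cam.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cam.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("labelled.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    res, frame = cam.read()
    if not res:
        break
    # ... run inference here and draw the detections onto `frame` ...
    out.write(frame)

cam.release()
out.release()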

Let me know how you get on!

jveitchmichaelis avatar Feb 21 '22 11:02 jveitchmichaelis

Thanks for your answer! I tried this command: python3 detect.py -m best-shark-yolov5s-int8.tflite --device short_video.mp4 --stream, but it loads the wrong classes. It detects bicycle, and my model was trained to detect shark.

constantinfite avatar Feb 21 '22 11:02 constantinfite

You need to provide a dataset/names file (--names); you should have one of these for the dataset you used for training. See this for example. If you need to make a copy, just update nc and the names list with your own classes.

The tflite file alone doesn't have any information about class names, it just returns an ID and this is by default mapped to COCO class names.
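
For example, a minimal names file for a single-class shark dataset could look like this (hypothetical file name and contents):

# shark.yaml
nc: 1
names: ['shark']

You would then pass it with something like: python3 detect.py -m best-shark-yolov5s-int8.tflite --names shark.yaml --device short_video.mp4 --stream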

jveitchmichaelis avatar Feb 21 '22 11:02 jveitchmichaelis

Ok thanks. And for saving the video I added this, but I don't know which image I have to save afterwards.

while True:
          fourcc = 'mp4v'  # output video codec
          fps = cam.get(cv2.CAP_PROP_FPS)
          w = int(cam.get(cv2.CAP_PROP_FRAME_WIDTH))
          h = int(cam.get(cv2.CAP_PROP_FRAME_HEIGHT))
          vid_writer = cv2.VideoWriter("exported.mp4", cv2.VideoWriter_fourcc(*fourcc), fps, (w, h))

          try:
            res, image = cam.read()

            if res is False:
                logger.error("Empty image received")
                break
            else:
                full_image, net_image, pad = get_image_tensor(image, input_size[0])
                pred = model.forward(net_image)

                model.process_predictions(pred[0], full_image, pad)

                tinference, tnms = model.get_last_inference_time()
                logger.info("Frame done in {}".format(tinference+tnms))
                vid_writer.write( ???? )

constantinfite avatar Feb 21 '22 11:02 constantinfite

If you modify the process_predictions function https://github.com/jveitchmichaelis/edgetpu-yolo/blob/main/edgetpumodel.py, have it return output_image (you might want to comment out the imwrite there). save_img is True by default so I think it's already annotating the images - do you get an output file created?
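
Roughly, the loop could then look like this (a sketch assuming process_predictions has been modified to return the annotated frame, and with the VideoWriter created once before the loop rather than on every iteration):

fourcc = cv2.VideoWriter_fourcc(*'mp4v')
fps = cam.get(cv2.CAP_PROP_FPS)
w = int(cam.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cam.get(cv2.CAP_PROP_FRAME_HEIGHT))
vid_writer = cv2.VideoWriter("exported.mp4", fourcc, fps, (w, h))

while True:
    res, image = cam.read()
    if not res:
        logger.error("Empty image received")
        break

    full_image, net_image, pad = get_image_tensor(image, input_size[0])
    pred = model.forward(net_image)

    # assumes process_predictions now returns the annotated image
    output_image = model.process_predictions(pred[0], full_image, pad)

    tinference, tnms = model.get_last_inference_time()
    logger.info("Frame done in {}".format(tinference + tnms))

    vid_writer.write(output_image)

vid_writer.release()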

jveitchmichaelis avatar Feb 21 '22 12:02 jveitchmichaelis

Yea it works, thanks, but look at my detection. The bounding box covers the whole image. The detection is not working. [image]

Do you have an idea why it is like this?

constantinfite avatar Feb 21 '22 14:02 constantinfite

Please verify your model with the official yolov5 repository and check that you get the expected result (with your tflite export).

This seems like it's an issue with your model, not with this library - does it work on a simpler image of a shark, for example?
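
For example, something like this against the official repo should show whether the export itself is fine (filenames are placeholders):

python detect.py --weights best-shark-yolov5s-int8.tflite --source shark.jpg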


jveitchmichaelis avatar Feb 21 '22 14:02 jveitchmichaelis

So I exported my model using this command: python export.py --weights best-shark-yolov5s.pt --include tflite --int8. I get a file best-shark-yolov5s-int8.tflite. I ran the detection on a simple image, but it takes a very long time to do inference on a simple image: 18 seconds! And that is on my computer with an RTX 2060. The detection is working at least: [image]

constantinfite avatar Feb 22 '22 09:02 constantinfite

OK, so this is with the main Ultralytics repository, and when you run the same image through this repo you get rubbish?

That's strange - if you're happy to share your weights (the non-edgetpu-compiled ones, so I can check whether it's an issue with the compilation) and that image, I can take a look. I can maybe add a debug mode that doesn't run the edgetpu model (it just runs tflite) to confirm. Feel free to email me (my username at gmail) if you don't want to share publicly.

By the way, super long inference time is normal on tflite CPU for some reason (fairly sure it doesn't use the GPU at all). I'm not sure why it's so poorly optimised but I get the same with my edge models. The same thing run on the Coral should be instant.
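
For reference, a minimal CPU-only sanity check of the exported .tflite with the plain TFLite interpreter would look something like this (not part of this repo; the model filename is a placeholder):

import numpy as np
import tensorflow as tf

# Load the exported (non-edgetpu-compiled) model on CPU
interpreter = tf.lite.Interpreter(model_path="best-shark-yolov5s-int8.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy tensor with the expected shape/dtype just to confirm the graph runs
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

out = interpreter.get_tensor(output_details[0]['index'])
print(out.shape)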


jveitchmichaelis avatar Feb 22 '22 10:02 jveitchmichaelis

It's my bad, I only ran inference on the Coral with the TensorFlow Lite model, but I have to run the Edge TPU Compiler before running on the Coral. I tried the command edgetpu_compiler -sa yolov5s-224-int8.tflite -d -t 600 with my model, but it says:

Edge TPU Compiler version 16.0.384591198
Searching for valid delegate with step 1
Try to compile segment with 261 ops
Started a compilation timeout timer of 600 seconds.
Compilation child process completed within timeout period.
Compilation failed!
Try to compile segment with 260 ops
Intermediate tensors: StatefulPartitionedCall:0_int8
Started a compilation timeout timer of 600 seconds.
Compilation child process completed within timeout period.
Compilation failed!
Try to compile segment with 259 ops
Intermediate tensors: model/tf_detect/Reshape_5,model/tf_detect/Reshape_1_requantized,model/tf_detect/Reshape_3_requantized

I tried with the stock yolov5s model and the detection works great on the Coral on a video, so I think the problem is my model. The model was trained on the yolov5 v3.1 release, so maybe it's deprecated.

constantinfite avatar Feb 22 '22 11:02 constantinfite

I ran the Edge TPU Compiler step on Google Cloud for my model and it works, but when I run the detection on my Coral with the Edge TPU model it has the same behaviour as before: the detection is slow and the bounding boxes are not correct.

constantinfite avatar Feb 23 '22 11:02 constantinfite

Ok, I'll take a look at the model you sent over when I get a chance. It's possible that the compilation for edgetpu makes the model perform poorly? If we can't figure it out, you can also contact the EdgeTPU guys directly about this; they're generally quite helpful and can look at your input/output models.

jveitchmichaelis avatar Feb 23 '22 11:02 jveitchmichaelis