
[Support]: OpenVINO Does not work with YOLOX model

Open aeozyalcin opened this issue 2 years ago • 14 comments

Describe the problem you are having

Awesome job getting OpenVINO into Frigate @NateMeyer! I just got a thin client with a J5005, and OpenVINO support was perfectly timed!

I tried using the yolox-tiny model with OpenVINO, and got the following error:

2023-01-21 20:32:53.946348978  [2023-01-21 12:32:53] detector.ov                    INFO    : Starting detection process: 376
2023-01-21 20:32:55.017048607  [2023-01-21 12:32:55] frigate.detectors.plugins.openvino INFO    : Model Input Shape: {1, 3, 416, 416}
2023-01-21 20:32:55.017056288  [2023-01-21 12:32:55] frigate.detectors.plugins.openvino INFO    : Model Output-0 Shape: {1, 3549, 85}
2023-01-21 20:32:55.017058803  [2023-01-21 12:32:55] frigate.detectors.plugins.openvino INFO    : Model has 1 Output Tensors
2023-01-21 20:32:55.019078340  Process detector:ov:
2023-01-21 20:32:55.042640498  Traceback (most recent call last):
2023-01-21 20:32:55.042646939    File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
2023-01-21 20:32:55.042648687      self.run()
2023-01-21 20:32:55.042650805    File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
2023-01-21 20:32:55.042652584      self._target(*self._args, **self._kwargs)
2023-01-21 20:32:55.042654630    File "/opt/frigate/frigate/object_detection.py", line 120, in run_detector
2023-01-21 20:32:55.042656489      detections = object_detector.detect_raw(input_frame)
2023-01-21 20:32:55.042658500    File "/opt/frigate/frigate/object_detection.py", line 71, in detect_raw
2023-01-21 20:32:55.042660363      return self.detect_api.detect_raw(tensor_input=tensor_input)
2023-01-21 20:32:55.042662445    File "/opt/frigate/frigate/detectors/plugins/openvino.py", line 45, in detect_raw
2023-01-21 20:32:55.042664053      infer_request.infer([tensor_input])
2023-01-21 20:32:55.042666257    File "/usr/local/lib/python3.9/dist-packages/openvino/runtime/ie_api.py", line 151, in infer
2023-01-21 20:32:55.042667807      return super().infer(
2023-01-21 20:32:55.042670418  RuntimeError: Can't set input blob with name: images, because model input (shape={1,3,416,416}) and blob (shape=(1.416.416.3)) are incompatible

Looks like there is an incompatibility between how Frigate sends frames to the YOLO model and what the model expects as its input. When I use the included ssdlite_mobilenet_v2 model, object detection works as expected.

P.S.: I also tried Intel's own person-detection-0201 model. Frigate ran without errors, but no detections were showing up. I don't think that's related, but just a data point.
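For context, the two shapes in the error differ only in memory layout; getting from the NHWC blob to the NCHW layout the converted model expects is a single transpose (a numpy sketch, not Frigate's code):

```python
import numpy as np

# The error above: Frigate hands the detector an NHWC blob of shape
# (1, 416, 416, 3), while the converted yolox-tiny IR expects NCHW
# (1, 3, 416, 416).
frame_nhwc = np.zeros((1, 416, 416, 3), dtype=np.uint8)

frame_nchw = np.transpose(frame_nhwc, (0, 3, 1, 2))
assert frame_nchw.shape == (1, 3, 416, 416)
```

In Frigate this layout is controlled by the `input_tensor` model option rather than by user code.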

Version

0.12.0-12D51D3

Frigate config file

mqtt:
  enabled: False

detectors:
  ov:
    type: openvino
    device: GPU
    model:
      path: /media/frigate/openvino-model/yolox-tiny.xml   

model:
  width: 416
  height: 416
  input_tensor: nhwc
  input_pixel_format: bgr
  labelmap_path: /media/frigate/openvino-model/coco_80cl.txt 

cameras:
  front_driveway:
    objects:
      track:
        - person
    ffmpeg:
      inputs:
        - path: http://192.168.0.241/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=admin&password=password
          roles:
            - detect
            - restream
          hwaccel_args: preset-intel-qsv-h264
          input_args:
            - -strict
            - experimental
            - -analyzeduration
            - 1000M
            - -probesize
            - 1000M
            - -rw_timeout
            - "5000000"

    detect:
      enabled: True
      width: 896
      height: 512
      fps: 10

    zones:

      lower_driveway:
        coordinates: 480,191,486,81,320,92,191,136,185,178,0,242,0,377
        objects:
          - person
        filters:
          person:
            min_area: 1000
            max_area: 10000
      middle_driveway:
        coordinates: 83,400,704,326,784,228,798,92,478,69,474,182,0,372
        objects:
          - person
        filters:
          person:
            min_area: 2000
            max_area: 20000          
      upper_driveway:
        coordinates: 896,512,896,123,783,89,776,227,703,326,83,400,0,372,0,512
        objects:
          - person
        filters:
          person:
            min_area: 4500
            max_area: 100000   

    motion:
      mask:
        - 201,113,398,36,896,38,896,36,896,0,0,0,0,165
      improve_contrast: True
      contour_area: 25

    snapshots:
      # Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
      # This value can be set via MQTT and will be updated in startup based on retained value
      enabled: True
      # Optional: save a clean PNG copy of the snapshot image (default: shown below)
      clean_copy: True
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: False
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: True
      # Optional: crop the snapshot (default: shown below)
      crop: False
      required_zones:
        - upper_driveway
        - middle_driveway
        - lower_driveway
      # Optional: height to resize the snapshot to (default: original size)
      # height: 175
      # Optional: Restrict snapshots to objects that entered any of the listed zones (default: no required zones)
        # - driveway_entrance
      # Optional: Camera override for retention settings (default: global values)
      retain:
        # Required: Default retention days (default: shown below)
        default: 2
        # objects:
        #   car: 7
    record:
      # Optional: Enable recording (default: global setting)
      enabled: True
      # Optional: Number of days to retain (default: global setting)
      retain:
        days: 1
      events:
        objects:
          - person
        required_zones:
          - upper_driveway
          - middle_driveway
          - lower_driveway
        retain:
          default: 2

Relevant log output

2023-01-21 20:32:53.946348978  [2023-01-21 12:32:53] detector.ov                    INFO    : Starting detection process: 376
2023-01-21 20:32:55.017048607  [2023-01-21 12:32:55] frigate.detectors.plugins.openvino INFO    : Model Input Shape: {1, 3, 416, 416}
2023-01-21 20:32:55.017056288  [2023-01-21 12:32:55] frigate.detectors.plugins.openvino INFO    : Model Output-0 Shape: {1, 3549, 85}
2023-01-21 20:32:55.017058803  [2023-01-21 12:32:55] frigate.detectors.plugins.openvino INFO    : Model has 1 Output Tensors
2023-01-21 20:32:55.019078340  Process detector:ov:
2023-01-21 20:32:55.042640498  Traceback (most recent call last):
2023-01-21 20:32:55.042646939    File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
2023-01-21 20:32:55.042648687      self.run()
2023-01-21 20:32:55.042650805    File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
2023-01-21 20:32:55.042652584      self._target(*self._args, **self._kwargs)
2023-01-21 20:32:55.042654630    File "/opt/frigate/frigate/object_detection.py", line 120, in run_detector
2023-01-21 20:32:55.042656489      detections = object_detector.detect_raw(input_frame)
2023-01-21 20:32:55.042658500    File "/opt/frigate/frigate/object_detection.py", line 71, in detect_raw
2023-01-21 20:32:55.042660363      return self.detect_api.detect_raw(tensor_input=tensor_input)
2023-01-21 20:32:55.042662445    File "/opt/frigate/frigate/detectors/plugins/openvino.py", line 45, in detect_raw
2023-01-21 20:32:55.042664053      infer_request.infer([tensor_input])
2023-01-21 20:32:55.042666257    File "/usr/local/lib/python3.9/dist-packages/openvino/runtime/ie_api.py", line 151, in infer
2023-01-21 20:32:55.042667807      return super().infer(
2023-01-21 20:32:55.042670418  RuntimeError: Can't set input blob with name: images, because model input (shape={1,3,416,416}) and blob (shape=(1.416.416.3)) are incompatible

FFprobe output from your camera

[
  {
    "return_code": 0,
    "stderr": {},
    "stdout": {
      "programs": [],
      "streams": [
        {
          "avg_frame_rate": "19/1",
          "codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
          "height": 512,
          "width": 896
        },
        {
          "avg_frame_rate": "0/0",
          "codec_long_name": "AAC (Advanced Audio Coding)"
        }
      ]
    }
  }
]

Frigate stats

No response

Operating system

Other Linux

Install method

HassOS Addon

Coral version

Other

Network connection

Wired

Camera make and model

Reolink RLC-810A

Any other information that may be helpful

No response

aeozyalcin avatar Jan 22 '23 04:01 aeozyalcin

The model input can be fixed by setting input_tensor: nchw, but the OpenVINO detector expects the output tensor to be in SSD format and will error out on the YOLOX output shape.
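As a rough illustration of that mismatch (shapes taken from the logs above; the parsing loop is a sketch, not Frigate's exact code):

```python
import numpy as np

# SSD-style models emit a "detection_out" tensor of shape (1, 1, N, 7);
# each row is [image_id, label, conf, xmin, ymin, xmax, ymax], with an
# image_id of -1 marking the end of the valid rows.
ssd_out = np.zeros((1, 1, 100, 7), dtype=np.float32)
for det in ssd_out[0][0]:
    if det[0] == -1:
        break
    label, conf, box = det[1], det[2], det[3:7]

# YOLOX instead emits raw per-anchor predictions of shape (1, 3549, 85):
# [x, y, w, h, objectness, 80 class scores], which still need decoding.
# A detector written against the SSD layout cannot parse this tensor.
yolox_out = np.zeros((1, 3549, 85), dtype=np.float32)
assert yolox_out[0][0].shape == (85,)  # one anchor's raw prediction
```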

NateMeyer avatar Jan 22 '23 05:01 NateMeyer

Awesome, thank you for the reply! Are there any plans to add support for formats other than SSD?

Side question, any idea why person-detection-0201 would not work, since that's also an SSD based model?

aeozyalcin avatar Jan 22 '23 05:01 aeozyalcin

I notice the person detect model has a label always of 0 for person. Try passing in a new label map with just one line:

person
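That label map is just a plain text file with one class name per line (the filename here is only an example):

```python
# person-detection-0201 reports label id 0 for every detection, so the
# label map needs a single line mapping id 0 -> "person".
with open("person_labels.txt", "w") as f:
    f.write("person\n")
```

Point `labelmap_path` at that file in the model config.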

NateMeyer avatar Jan 22 '23 05:01 NateMeyer

Ok I have some updates. It turns out that the person-detection-0201 model also requires nchw format. Applying this change made that model work!

As for yolox, your suggestion to change to nchw got us further. As expected, the detection process now fails due to the yolo output format not matching the expected SSD output format.

This is now the error:

2023-01-22 17:54:46.406977407  [2023-01-22 09:54:46] frigate.detectors.plugins.openvino INFO    : Model Input Shape: {1, 3, 416, 416}
2023-01-22 17:54:46.407103750  [2023-01-22 09:54:46] frigate.detectors.plugins.openvino INFO    : Model Output-0 Shape: {1, 3549, 85}
2023-01-22 17:54:46.407195458  [2023-01-22 09:54:46] frigate.detectors.plugins.openvino INFO    : Model has 1 Output Tensors
2023-01-22 17:54:46.467231402  Process detector:ov:
2023-01-22 17:54:46.468384337  Traceback (most recent call last):
2023-01-22 17:54:46.468405707    File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
2023-01-22 17:54:46.468407912      self.run()
2023-01-22 17:54:46.468409626    File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
2023-01-22 17:54:46.468411126      self._target(*self._args, **self._kwargs)
2023-01-22 17:54:46.468412847    File "/opt/frigate/frigate/object_detection.py", line 120, in run_detector
2023-01-22 17:54:46.468414375      detections = object_detector.detect_raw(input_frame)
2023-01-22 17:54:46.468428917    File "/opt/frigate/frigate/object_detection.py", line 71, in detect_raw
2023-01-22 17:54:46.468431362      return self.detect_api.detect_raw(tensor_input=tensor_input)
2023-01-22 17:54:46.468433102    File "/opt/frigate/frigate/detectors/plugins/openvino.py", line 52, in detect_raw
2023-01-22 17:54:46.468434371      if object_detected[0] != -1:
2023-01-22 17:54:46.468449474  IndexError: invalid index to scalar variable.
2023-01-22 17:54:47.976514363  [2023-01-22 09:54:47] frigate.object_processing      INFO    : Exiting object processor...

Are there any plans to add the ability to pass a model type to the OV detector in Frigate, similar to how OpenVINO's object_detection.py demo does?

-at, --architecture_type  Required. Specify model' architecture type. Valid values are {centernet,detr,ctpn,faceboxes,nanodet,nanodet-plus,retinaface,retinaface-pytorch,ssd,ultra_lightweight_face_detection,yolo,yolov4,yolof,yolox,yolov3-onnx}.
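For reference, decoding that (1, 3549, 85) tensor roughly follows the upstream YOLOX demo's post-processing (a simplified sketch; not the code that would land in Frigate):

```python
import numpy as np

# The raw YOLOX output holds [x, y, w, h, objectness, 80 class scores]
# per anchor; box centers/sizes still need grid offsets and stride
# scaling applied. For a 416x416 input, strides 8/16/32 give
# 52*52 + 26*26 + 13*13 = 3549 anchors, matching the log above.
def decode_yolox(outputs, img_size=(416, 416), strides=(8, 16, 32)):
    grids, expanded_strides = [], []
    for stride in strides:
        hs, ws = img_size[0] // stride, img_size[1] // stride
        xv, yv = np.meshgrid(np.arange(ws), np.arange(hs))
        grids.append(np.stack((xv, yv), 2).reshape(1, -1, 2))
        expanded_strides.append(np.full((1, grids[-1].shape[1], 1), stride))
    grids = np.concatenate(grids, 1)
    expanded_strides = np.concatenate(expanded_strides, 1)
    outputs[..., :2] = (outputs[..., :2] + grids) * expanded_strides
    outputs[..., 2:4] = np.exp(outputs[..., 2:4]) * expanded_strides
    return outputs
```

Final per-class scores are then objectness times class score, followed by NMS.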

aeozyalcin avatar Jan 22 '23 20:01 aeozyalcin

I'm glad to hear you got the person-detect model working.

I have no plans right now to work on adding other model types to the detector, but that is certainly something that could be done.

NateMeyer avatar Jan 22 '23 21:01 NateMeyer

I modified openvino.py in the plugins folder to accommodate a yolo model, but I am unfortunately a noob when it comes to deploying/building docker containers. I committed my changes to the frigate image and saved it as a local docker image, but was unable to run it. I get this error, would love some pointers:

sudo docker run -d   --name frigate-yolo   --restart=unless-stopped   --mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000   --device /dev/dri/renderD128   --shm-size=64m   -v /usr/share/hassio/media:/media   -v /usr/share/hassio/homeassistant:/config   -v /etc/localtime:/etc/localtime:ro   -e CONFIG_FILE='/config/frigate.yml'   -p 5000:5000   -p 8554:8554   -p 8555:8555/tcp   -p 8555:8555/udp frigate_yolo
8b0f60aa2621ca6c86f3901c0c44b5263ee33158f4241a78bb8dfb84ea15bc9b
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: unable to setup user: stat /dev/null: no such file or directory: unknown.

aeozyalcin avatar Jan 26 '23 02:01 aeozyalcin

@aeozyalcin mind sharing your config for person-detection-0201 ? I can't get it to work myself

alez007 avatar Jan 26 '23 15:01 alez007

@aeozyalcin whenever I got an error like that, it was because I already had a frigate container that wasn't removed.

The docker image is fairly painless to build. Fork/clone the dev branch of the repo and run make local to generate a frigate:latest image you can use on your machine.

NateMeyer avatar Jan 27 '23 03:01 NateMeyer

I have successfully implemented the yolox_tiny model! It was a bit of a PIA but it works haha. I would love to make a PR to check this in, but have never done so on Github before. Any tips appreciated.

@alez007 like Nate said, all I had to do was to change the input_tensor type to nchw. You will also need to change the image size to 384x384. It's 300x300 by default for the included ssd model.

aeozyalcin avatar Jan 28 '23 05:01 aeozyalcin

I have successfully implemented the yolox_tiny model! It was a bit of a PIA but it works haha. I would love to make a PR to check this in, but have never done so on Github before. Any tips appreciated.

@alez007 like Nate said, all I had to do was to change the input_tensor type to nchw. You will also need to change the image size to 384x384. It's 300x300 by default for the included ssd model.

Can you share your experience between the 'default' ssdlite_mobilenet_v2 model versus person-detection-0201? If you use only person detection, is this model then better? And especially in dark environments with IR enabled?

Caros2017 avatar Jan 28 '23 17:01 Caros2017

I have successfully implemented the yolox_tiny model! It was a bit of a PIA but it works haha. I would love to make a PR to check this in, but have never done so on Github before. Any tips appreciated.

@alez007 like Nate said, all I had to do was to change the input_tensor type to nchw. You will also need to change the image size to 384x384. It's 300x300 by default for the included ssd model.

Can you share your experience between the 'default' ssdlite_mobilenet_v2 model versus person-detection-0201? If you use only person detection, is this model then better? And especially in dark environments with IR enabled?

I am actually positively surprised at how well the ssdlite_mobilenet_v2 model is performing with OpenVINO. I am running it on the Intel iGPU, and coming from a Pi+Coral setup, it definitely performs better in terms of accuracy.

Regarding the person-detection-0201 model, I briefly ran it and found that it was a bit 'trigger happy'. It was falsely picking up non-person subjects. I'm sure it could be tuned, but I didn't bother.

I'm currently running YOLOx-tiny inside Frigate as of last night. I'll report back on how it performs. It should be better than ssdlite_mobilenet_v2 in theory based on the mAP.

aeozyalcin avatar Jan 28 '23 21:01 aeozyalcin

For those interested, I am working on a PR to integrate yolox support into Frigate. If anyone wants to try it, give it a go. It's 99% there. https://github.com/aeozyalcin/frigate/tree/openvino_yolox

aeozyalcin avatar Jan 29 '23 06:01 aeozyalcin

For those interested, I am working on a PR to integrate yolox support into Frigate. If anyone wants to try it, give it a go. It's 99% there. https://github.com/aeozyalcin/frigate/tree/openvino_yolox

Wow, really nice. Appreciate the work. Will test it once merged and released in beta. Also wondering what your experience is between the models :) But that will of course take some time.

Caros2017 avatar Jan 29 '23 09:01 Caros2017

Any guide/details on what omz_downloader config you used for processing the model? I'm keen to use it to solve/test some of the false positives I was having with ssdlite.

nickp27 avatar Feb 04 '23 01:02 nickp27

https://github.com/blakeblackshear/frigate/pull/5285 All merged and good to go. Closing the issue. Now I am taking a crack at adding support for YOLOv5, but no promises.

aeozyalcin avatar Feb 04 '23 01:02 aeozyalcin

Any guide/details on what omz_downloader config you used for processing the model? I'm keen to use it to solve/test some of the false positives I was having with ssdlite.

Try these:

omz_downloader --name yolox_tiny
omz_converter --name yolox_tiny

aeozyalcin avatar Feb 04 '23 01:02 aeozyalcin

Very nice work @aeozyalcin. Looking forward to the new beta to try this.

How are your experiences so far between yolox_tiny and ssdlite_mobilenet_v2 ?

Caros2017 avatar Feb 04 '23 12:02 Caros2017

Very nice work @aeozyalcin. Looking forward to the new beta to try this.

How are your experiences so far between yolox_tiny and ssdlite_mobilenet_v2 ?

I am liking YOLOx so far! Because the model input is 416*416, it's able to detect smaller objects. But of course that means that it can also have more false positives depending on the scene.

I have an issue open on the YOLOx repo to see if anyone will help me with transfer learning/fine tuning the model to make it even more effective in my application.

aeozyalcin avatar Feb 06 '23 07:02 aeozyalcin

Very nice work @aeozyalcin. Looking forward to the new beta to try this. How are your experiences so far between yolox_tiny and ssdlite_mobilenet_v2 ?

I am liking YOLOx so far! Because the model input is 416*416, it's able to detect smaller objects. But of course that means that it can also have more false positives depending on the scene.

I have an issue open on the YOLOx repo to see if anyone will help me with transfer learning/fine tuning the model to make it even more effective in my application.

Doesn't the bigger sample size make it harder to detect objects? I have found YoloX to be more reliable at picking up my dog, but it's been running for 2 days and hasn't picked up a person once (so on my 720 x 576 substream, my guess had been that 416 x 416 rather than mobilenet's 300 x 300 was the cause).

nickp27 avatar Feb 06 '23 10:02 nickp27

Very nice work @aeozyalcin. Looking forward to the new beta to try this. How are your experiences so far between yolox_tiny and ssdlite_mobilenet_v2 ?

I am liking YOLOx so far! Because the model input is 416*416, it's able to detect smaller objects. But of course that means that it can also have more false positives depending on the scene. I have an issue open on the YOLOx repo to see if anyone will help me with transfer learning/fine tuning the model to make it even more effective in my application.

Doesn't the bigger sample size make it harder to detect objects? I have found YoloX to be more reliable at picking up my dog, but it's been running for 2 days and hasn't picked up a person once (so on my 720 x 576 substream, my guess had been that 416 x 416 rather than mobilenet's 300 x 300 was the cause).

I think that 416*416 is pretty large compared to the whole area in your substream. Have you tried to compare with a higher resolution in the substream or for the test, try the main stream?

Caros2017 avatar Feb 06 '23 19:02 Caros2017

Exactly. I am running a 1080p stream through the model. I found that Frigate will not "zoom in" on the section of motion and upscale it to the model input dimensions; instead, the minimum region it will send is the model input dimensions. For example, for a 720p stream with this 416x416 model, each input frame would cover roughly 1/3 of the whole frame.
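A back-of-the-envelope sketch of that behavior (the stream and motion-box sizes here are assumed numbers, not from this thread):

```python
# Frigate clamps the crop ("region") sent to the detector so it is never
# smaller than the model's input size; small motion areas are grown
# rather than upscaled.
model_size = 416
stream_w, stream_h = 1280, 720        # a typical 720p detect stream

motion_box = 100                      # a small motion area, in pixels
region = max(motion_box, model_size)  # grown to 416x416

fraction_of_width = region / stream_w
assert abs(fraction_of_width - 0.325) < 1e-9  # roughly 1/3 of the width
```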

aeozyalcin avatar Feb 06 '23 20:02 aeozyalcin

First of all, thanks for the PR @aeozyalcin , good stuff 🦾

I've been running yolox_tiny for a couple of days now and comparing it with ssdlite (default one). Basically my setup is detect at 1080p 4/3 in both scenarios and what I noticed is that yolox_tiny is returning smaller percentages than ssdlite for objects in the same positions with pretty much the same luminosity and setup.

Can provide examples but in general if ssdlite thinks there's 90% chance to have a person, yolox_tiny is around 7-8% less than that plus it gives me more false positives.

Will continue testing but wondering if there is a logical explanation for these smaller detect percentages, taking into account that the model is bigger for yolox but the detect resolution remained the same in both scenarios. Would it be worth it to increase the detect resolution ?

alez007 avatar Feb 09 '23 09:02 alez007

Any guide/details on what omz_downloader config you used for processing the model? I'm keen to use it to solve/test some of the false positives I was having with ssdlite.

Try these:

omz_downloader --name yolox_tiny
omz_converter --name yolox_tiny

@nickp27 a few more packages needed to be installed for me (slackware 15) for omz_converter to work, one of them being torch; just pip install all of them and it will work.

alez007 avatar Feb 09 '23 09:02 alez007

First of all, thanks for the PR @aeozyalcin , good stuff 🦾

I've been running yolox_tiny for a couple of days now and comparing it with ssdlite (default one). Basically my setup is detect at 1080p 4/3 in both scenarios and what I noticed is that yolox_tiny is returning smaller percentages than ssdlite for objects in the same positions with pretty much the same luminosity and setup.

Can provide examples but in general if ssdlite thinks there's 90% chance to have a person, yolox_tiny is around 7-8% less than that plus it gives me more false positives.

Will continue testing but wondering if there is a logical explanation for these smaller detect percentages, taking into account that the model is bigger for yolox but the detect resolution remained the same in both scenarios. Would it be worth it to increase the detect resolution ?

The scores depend on how the model was trained, its weights, etc.; I don't think the actual size of the image sent to the model affects that.

For ANY different model, you will want to adjust the threshold and min_score values, as Frigate has those tuned just for the default Coral model. That will greatly help in reducing false positives with the new model.
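As a conceptual sketch of how those two knobs differ (illustrative logic only, not Frigate's actual implementation):

```python
import statistics

# min_score discards individual low-confidence detections up front,
# while threshold gates whether the tracked object's overall score
# counts as a true positive.
def keep_detection(score, min_score=0.5):
    return score >= min_score

def is_true_positive(score_history, threshold=0.7):
    # Frigate keeps a running score over an object's lifetime; a median
    # of recent scores stands in for that here.
    return statistics.median(score_history) >= threshold
```

Raising both values is the usual first step when a new model produces extra false positives.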

NickM-27 avatar Feb 09 '23 12:02 NickM-27

Any guide/details on what omz_downloader config you used for processing the model? I'm keen to use it to solve/test some of the false positives I was having with ssdlite.

Try these:

omz_downloader --name yolox_tiny
omz_converter --name yolox_tiny

@nickp27 a few more packages needed to be installed for me (slackware 15) for omz_converter to work, one of them being torch; just pip install all of them and it will work.

Installing openvino-dev should also install the omz* tools.

First of all, thanks for the PR @aeozyalcin , good stuff 🦾

I've been running yolox_tiny for a couple of days now and comparing it with ssdlite (default one). Basically my setup is detect at 1080p 4/3 in both scenarios and what I noticed is that yolox_tiny is returning smaller percentages than ssdlite for objects in the same positions with pretty much the same luminosity and setup.

Can provide examples but in general if ssdlite thinks there's 90% chance to have a person, yolox_tiny is around 7-8% less than that plus it gives me more false positives.

Will continue testing but wondering if there is a logical explanation for these smaller detect percentages, taking into account that the model is bigger for yolox but the detect resolution remained the same in both scenarios. Would it be worth it to increase the detect resolution ?

If you would like to share your config, I can take a look at it. I want to see the config of the camera in question, and how you have the openvino detector defined in your yaml.

aeozyalcin avatar Feb 10 '23 04:02 aeozyalcin

Any guide/details on what omz_downloader config you used for processing the model? I'm keen to use it to solve/test some of the false positives I was having with ssdlite.

Try these:

omz_downloader --name yolox_tiny
omz_converter --name yolox_tiny

@nickp27 a few more packages needed to be installed for me (slackware 15) for omz_converter to work, one of them being torch; just pip install all of them and it will work.

Installing openvino-dev should also install the omz* tools.

First of all, thanks for the PR @aeozyalcin , good stuff 🦾

I've been running yolox_tiny for a couple of days now and comparing it with ssdlite (default one). Basically my setup is detect at 1080p 4/3 in both scenarios and what I noticed is that yolox_tiny is returning smaller percentages than ssdlite for objects in the same positions with pretty much the same luminosity and setup.

Can provide examples but in general if ssdlite thinks there's 90% chance to have a person, yolox_tiny is around 7-8% less than that plus it gives me more false positives.

Will continue testing but wondering if there is a logical explanation for these smaller detect percentages, taking into account that the model is bigger for yolox but the detect resolution remained the same in both scenarios. Would it be worth it to increase the detect resolution ?

If you would like to share your config, I can take a look at it. I want to see the config of the camera in question, and how you have the openvino detector defined in your yaml.

hey, yes, sorry for the late reply, here it is:

mqtt:
  host: mqtt
  port: 1883
  topic_prefix: frigate012
  client_id: frigate012
database:
  path: /config/frigate.db
logger:
  default: info
  logs:
    frigate.event: debug
detectors:
  ov:
    type: openvino
    device: CPU
    model:
      path: /frigate_models/yolox_tiny/public/yolox-tiny/FP32/yolox-tiny.xml

model:
  width: 416
  height: 416
  input_tensor: nchw
  input_pixel_format: bgr
  model_type: yolox
  labelmap_path: /frigate_models/dataset/coco_80cl.txt
ffmpeg:
  hwaccel_args: preset-vaapi
  input_args: preset-http-reolink
  output_args:
    record: preset-record-generic-audio-aac
detect:
  width: 1440
  height: 1080
  fps: 7
  enabled: True
motion:
  improve_contrast: True

objects:
  track:
    - person
    - cat
  filters:
    person:
      min_ratio: 0.3
      max_ratio: 0.8
      min_score: 0.75
      threshold: 0.85
    cat:
      min_score: 0.70
      threshold: 0.80
      max_area: 70000

record:
  enabled: True
  expire_interval: 60
  retain:
    days: 10
    mode: all
  events:
    pre_capture: 5
    post_capture: 5
    objects:
      - person
      - cat
    required_zones: []
    retain:
      default: 15
      mode: active_objects
      objects:
        person: 20

snapshots:
  enabled: True
  clean_copy: True
  timestamp: False
  bounding_box: True
  crop: False
  #height: 175
  required_zones: []
  retain:
    default: 15
    objects:
      person: 20

go2rtc:
  streams:
    camera1:
      - "ffmpeg:http://192.168.1.9/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=admin&password="
      - ffmpeg:camera1#audio=opus

cameras:
  camera1:
    detect:
      enabled: True
    record:
      enabled: True
    motion:
      mask:
        - 828,104,1440,149,1440,0,0,0,0,367
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/camera1?video=copy&audio=aac
          input_args: preset-rtsp-restream
          roles:
            - record
            - detect
            - restream
    objects:
      filters:
        person:
          min_area: 15000

alez007 avatar Feb 12 '23 03:02 alez007

Very interesting. Nothing stands out to me as something that could be causing the poor performance. I am running the detector on a 720p/1080p stream, so it's in a similar resolution range to what you are running yours at. The only thing I can think of is maybe the subjects are too far away, but I wouldn't be able to compare to mine without seeing a picture from your camera.

A few unrelated remarks: I see you are using the CPU detector with FP32 precision. Is there a reason why you aren't using FP16 with the GPU detector instead? I assume you have an Intel CPU, so you probably also have an Intel integrated GPU?

PS: I just got YOLOv5 working in Frigate with OpenVINO, and it's looking really promising so far. I will create a PR when I feel it's ready. I love how flexible YOLOv5 is.

aeozyalcin avatar Feb 13 '23 01:02 aeozyalcin

Very interesting. Nothing stands out to me as something that could be causing the poor performance. I am running the detector on a 720p/1080p stream, so it's in a similar resolution range to what you are running yours at. The only thing I can think of is maybe the subjects are too far away, but I wouldn't be able to compare to mine without seeing a picture from your camera.

thanks for checking 👍

A few unrelated remarks: I see you are using the CPU detector with FP32 precision. Is there a reason why you aren't using FP16 with the GPU detector instead? I assume you have an Intel CPU, so you probably also have an Intel integrated GPU?

It's unrelated to Frigate: I have a problem on kernel 5.19 where the GPU sometimes hangs, and until I have time to revert to 5.15 I'm using the CPU. Regarding FP32 vs FP16 precision, I'm not sure why I chose FP32, to be honest; probably my lack of knowledge around what to use in what circumstances, and laziness to google it. You think I should make it FP16?

PS: I just got YOLOv5 working in Frigate with OpenVINO, and it's looking really promising so far. I will create a PR when I feel it's ready. I love how flexible YOLOv5 is.

fantastic work 🥳 looking forward to trying it out 🦾

alez007 avatar Feb 13 '23 10:02 alez007

Very interesting. Nothing stands out to me as something that could be causing the poor performance. I am running the detector on a 720p/1080p stream, so it's in a similar resolution range to what you are running yours at. The only thing I can think of is maybe the subjects are too far away, but I wouldn't be able to compare to mine without seeing a picture from your camera.

A few unrelated remarks: I see you are using the CPU detector with FP32 precision. Is there a reason why you aren't using FP16 with the GPU detector instead? I assume you have an Intel CPU, so you probably also have an Intel integrated GPU?

PS: I just got YOLOv5 working in Frigate with OpenVINO, and it's looking really promising so far. I will create a PR when I feel it's ready. I love how flexible YOLOv5 is.

Really wondering what the promising part is about YOLOv5 versus yolox-tiny. Does it perform better? Is it more accurate?

P.S.: Have you seen that they just released YOLOv8? ;)

Caros2017 avatar Feb 13 '23 19:02 Caros2017