[FR] Face recognition
Is there any chance of implementing face recognition? At the moment I have Shinobi running for that, but it's really outdated and lacks features. Having face recognition built into frigate would be awesome!
Thanks!
https://missinglink.ai/guides/tensorflow/tensorflow-face-recognition-three-quick-tutorials/ https://github.com/search?q=Face+Recognition+Tensorflow
It's possible, but probably not going to be a focus in the near future.
How about sending the detected person image from MQTT to a second project like https://github.com/JanLoebel/face_recognition, using something like Node-RED to glue it all together? I only suggest it because I'm going to look at doing the same thing. I haven't evaluated that project yet, so there may be better ones out there.
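A minimal sketch of that glue in Python instead of Node-RED, assuming frigate publishes person snapshots on `frigate/<camera>/person/snapshot` (its documented topic scheme) and that the face service accepts a raw JPEG POST. The `/detect` endpoint and port here are hypothetical placeholders; check the face_recognition project's README for its real API:

```python
# Hypothetical MQTT -> face-service bridge. The snapshot topic follows
# frigate's 'frigate/<camera>/<object>/snapshot' scheme; the face service
# URL and /detect path are placeholders, not the real API of
# JanLoebel/face_recognition.
import urllib.request

FACE_SERVICE_URL = "http://localhost:8080/detect"  # placeholder endpoint


def build_face_request(jpeg_bytes, url=FACE_SERVICE_URL):
    """Wrap a raw JPEG payload in an HTTP POST for the face service."""
    return urllib.request.Request(
        url,
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
        method="POST",
    )


def on_message(client, userdata, msg):
    # msg.payload is the JPEG snapshot frigate published
    req = build_face_request(msg.payload)
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("face service answered:", resp.status)


def run_bridge(broker="192.168.2.120", camera="front"):
    # paho-mqtt is a third-party dependency: pip install paho-mqtt
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(broker)
    client.subscribe("frigate/{}/person/snapshot".format(camera))
    client.loop_forever()


# run_bridge()  # uncomment to start the bridge
```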
That may work, but I'm using face recognition to open our front door. Even with Shinobi handling this directly, the system sometimes needs several seconds. I'm afraid that with one more step added, the delay gets worse.
Actually, I want to have only one tool for all the work (motion, people, faces); that's why I asked in the first place. ;)
Frigate already accepts custom models, and there are several TFLite ones for facial recognition. IMHO, if you are able to cross-train a model with your faces, this should already work with the current code: instead of person or car, you would have the persons you trained as object labels. Since I assume you don't want the door to open for just any human face, some training would be needed in any case, and given the limitations of e.g. the Coral detector, it would need to be carried out on another, preferably GPU-powered, platform anyway.
I have never trained my own custom model, but I like the idea. Maybe I'll give it a try. Any tips on where to start?
The nice thing about Shinobi is that YOLO, face recognition, and FFmpeg all run on the GPU inside an unRAID Docker container. But I also have a Google Coral.
A good starting point would actually be the coral.ai homepage itself. Have a look at https://coral.ai/examples/. There are also examples of cross-training for object recognition, to get everything set up and familiarise yourself with the topic.
For the face recognition part I had some success with this tutorial, which is for TensorFlow (GPU/CPU) and would need to be converted to run on the Coral (TFLite format). It's been a while since I looked into this, but it seems people got MobileFaceNet to run on the Coral, so it's possible.
Thanks a lot! I'll give this a try over the weekend.
@corgan2222 Did you have a chance to try to train your own model for face recognition? Can you share how to do it?
Also interested, but I've never really tried to train a net, probably because of a lack of time. :(
@corgan2222 Did you have a chance to try to train your own model for face recognition? Can you share how to do it?
No, I didn't try again after looking into it. :(
Still very interested in face recognition. Is there any chance of getting this kind of feature?
Definitely. I will be looking at this with custom models soon.
Sounds great! I found a really nice tutorial that explains how to easily create a TFLite Edge TPU model: https://teachablemachine.withgoogle.com
I now have a trained model, but I'm not sure about the existing models. Do I have to combine it with the existing one to keep detecting persons/cars etc.?
Yes
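In other words, the custom face classes have to be trained into one model alongside the default object classes, and the labelmap has to cover both. A sketch of what a combined labelmap.txt might then contain (the indices here are purely illustrative; they must match whatever class ids your retrained model actually outputs):

```
0  person
1  bicycle
2  car
...
90  Stefan
91  Steffi
92  DHL
```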
I tried it last night, but I'm stuck and have no clue how to solve it.
What I have done so far (Windows 10, WSL2):
- created a model on the website with 6 different image sets of 50-150 images each; the model runs fine with a webcam
- downloaded the TensorFlow Lite Edge TPU model
- installed rtsp-simple-server
- compiled ffmpeg
- rendered some videos at 1 fps from saved images for testing:
  ffmpeg -framerate 1 -pattern_type glob -i '*.jpg' -c:v libx264 -r 30 front.mp4
- sent the video to the RTSP server with ffmpeg:
  ffmpeg -re -stream_loop -1 -i .\front.mp4 -c copy -f rtsp rtsp://localhost:8559/mystream
- installed frigate with docker compose on my Windows machine and connected to the RTSP server; with the default model everything runs fine, even on CPU
Then I changed the docker compose file to mount the new model and the labelmap. Frigate comes up, shows some images, and then crashes. No clue what the error message means.
docker-compose.yml:

version: '3.9'
services:
  frigate:
    container_name: frigate
    privileged: true # this may not be necessary for all setups
    restart: unless-stopped
    image: blakeblackshear/frigate:stable-amd64
    devices:
      - /dev/bus/usb:/dev/bus/usb
      #- /dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./config/config.yml:/config/config.yml:ro
      - ./data:/media/frigate
      - ./model_edgetpu.tflite:/model_edgetpu.tflite
      - ./labelmap.txt:/labelmap.txt
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - '5000:5000'
      - '1935:1935' # RTMP feeds
    environment:
      FRIGATE_RTSP_PASSWORD: 'password'
config.yml:

mqtt:
  host: 192.168.2.120
  topic_prefix: frigate
  user: broker
  client_id: frigate_test

detectors:
  coral:
    type: edgetpu
    device: usb

ffmpeg:
  # Optional: global ffmpeg args (default: shown below)
  global_args: -hide_banner -loglevel warning
  # Optional: global hwaccel args (default: shown below)
  # NOTE: See hardware acceleration docs for your specific device
  hwaccel_args: []
  # Optional: global input args (default: shown below)
  input_args: -avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1
  # Optional: global output args
  output_args:
    # Optional: output args for detect streams (default: shown below)
    detect: -f rawvideo -pix_fmt yuv420p
    # Optional: output args for record streams (default: shown below)
    record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an
    # Optional: output args for clips streams (default: shown below)
    clips: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an
    # Optional: output args for rtmp streams (default: shown below)
    rtmp: -c copy -f flv

cameras:
  front:
    ffmpeg:
      inputs:
        #- path: rtsp://192.168.2.91:80/live/stream
        - path: rtsp://192.168.2.30:8559/mystream
          roles:
            - detect
            #- rtmp
    width: 640
    height: 480
    fps: 20
    #mask: poly,501,90,399,469,4,470,13,171,114,155,221,89,292,63,422,63,462,86
    # Optional: camera level motion config
    motion:
      # Optional: motion mask
      # NOTE: see docs for more detailed info on creating masks
      mask:
        - 507,0,507,72,397,97,287,43,145,105,115,0
    # Optional: Camera level detect settings
    detect:
      # Optional: enables detection for the camera (default: True)
      # This value can be set via MQTT and will be updated in startup based on retained value
      enabled: True
      # Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
      max_disappeared: 25
    # Optional: save clips configuration
    clips:
      # Required: enables clips for the camera (default: shown below)
      # This value can be set via MQTT and will be updated in startup based on retained value
      enabled: false
      # Optional: Number of seconds before the event to include in the clips (default: shown below)
      pre_capture: 5
      # Optional: Number of seconds after the event to include in the clips (default: shown below)
      post_capture: 5
      # Optional: Objects to save clips for. (default: all tracked objects)
      objects:
        #- person
        - Stefan
        - Steffi
        - Ursel
        - Leonas
        - DHL
      # Optional: Restrict clips to objects that entered any of the listed zones (default: no required zones)
      required_zones: []
      # Optional: Camera override for retention settings (default: global values)
      retain:
        # Required: Default retention days (default: shown below)
        default: 10
        # Optional: Per object retention days
        objects:
          person: 15
    # Optional: 24/7 recording configuration
    record:
      # Optional: Enable recording (default: global setting)
      enabled: false
      # Optional: Number of days to retain (default: global setting)
      retain_days: 2
    # Optional: RTMP re-stream configuration
    rtmp:
      # Required: Enable the live stream (default: True)
      enabled: false
    # Optional: Configuration for the jpg snapshots published via MQTT
    mqtt:
      # Optional: Enable publishing snapshot via mqtt for camera (default: shown below)
      # NOTE: Only applies to publishing image data to MQTT via 'frigate/<camera_name>/<object_name>/snapshot'.
      # All other messages will still be published.
      enabled: false
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: True
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: True
      # Optional: crop the snapshot (default: shown below)
      crop: false
      # Optional: height to resize the snapshot to (default: shown below)
      height: 640
      # Optional: Restrict mqtt messages to objects that entered any of the listed zones (default: no required zones)
      required_zones: []
    # Optional: Camera level object filters config.
    objects:
      track:
        - person
        #- dog
      # Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
      # Checks based on the bottom center of the bounding box of the object.
      # NOTE: This mask is COMBINED with the object type specific mask below
      #mask: 0,0,1000,0,1000,200,0,200
      # filters:
      #   person:
      #     min_area: 5000
      #     max_area: 100000
      #     min_score: 0.5
      #     threshold: 0.7
      #     # Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
      #     # Checks based on the bottom center of the bounding box of the object
      #     mask: 507,0,507,72,397,97,287,43,145,105,115,0
    # Optional: Configuration for the jpg snapshots written to the clips directory for each event
    snapshots:
      # Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
      # This value can be set via MQTT and will be updated in startup based on retained value
      enabled: true
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: true
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: False
      # Optional: crop the snapshot (default: shown below)
      crop: False
      # Optional: height to resize the snapshot to (default: original size)
      height: 640
      # Optional: Restrict snapshots to objects that entered any of the listed zones (default: no required zones)
      required_zones: []
      # Optional: Camera override for retention settings (default: global values)
      retain:
        # Required: Default retention days (default: shown below)
        default: 10
        # Optional: Per object retention days
        objects:
          person: 15

# Optional: Global ffmpeg args
# "ffmpeg" + global_args + input_args + "-i" + input + output_args
ffmpeg:
  # Optional: global ffmpeg args (default: shown below)
  global_args:
    - -hide_banner
    - -loglevel
    - panic
  # Optional: global hwaccel args (default: shown below)
  # NOTE: See hardware acceleration docs for your specific device
  hwaccel_args: []
  # Optional: global input args (default: shown below)
  input_args:
    - -avoid_negative_ts
    - make_zero
    - -fflags
    - nobuffer
    - -flags
    - low_delay
    - -strict
    - experimental
    - -fflags
    - +genpts+discardcorrupt
    - -rtsp_transport
    - tcp
    - -stimeout
    - '5000000'
    - -use_wallclock_as_timestamps
    - '1'
  # Optional: global output args (default: shown below)
  # output_args:
  #   - -f
  #   - rawvideo
  #   - -pix_fmt
  #   - yuv420p
Docker Log
Starting nginx ...done.
peewee_migrate INFO : Starting migrations
peewee_migrate INFO : There is nothing to migrate
frigate.mqtt INFO : MQTT connected
detector.coral INFO : Starting detection process: 92
frigate.edgetpu INFO : Attempting to load TPU as usb
frigate.edgetpu INFO : No EdgeTPU detected.
Process detector:coral:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 152, in load_delegate
    delegate = Delegate(library, options)
  File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 111, in __init__
    raise ValueError(capture.message)
ValueError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/frigate/frigate/edgetpu.py", line 124, in run_detector
    object_detector = LocalObjectDetector(tf_device=tf_device, num_threads=num_threads)
  File "/opt/frigate/frigate/edgetpu.py", line 63, in __init__
    edge_tpu_delegate = load_delegate('libedgetpu.so.1.0', device_config)
  File "/usr/local/lib/python3.8/dist-packages/tflite_runtime/interpreter.py", line 154, in load_delegate
    raise ValueError('Failed to load delegate from {}\n{}'.format(
ValueError: Failed to load delegate from libedgetpu.so.1.0

frigate.app INFO : Camera processor started for front: 97
frigate.app INFO : Capture process started for front: 98
frigate.watchdog INFO : Detection appears to have stopped. Exiting frigate...
frigate.app INFO : Stopping...
frigate.record INFO : Exiting recording maintenance...
frigate.events INFO : Exiting event processor...
frigate.object_processing INFO : Exiting object processor...
frigate.events INFO : Exiting event cleanup...
frigate.watchdog INFO : Exiting watchdog...
frigate.stats INFO : Exiting watchdog...
peewee.sqliteq INFO : writer received shutdown request, exiting.
root INFO : Waiting for detection process to exit gracefully...
frigate.video INFO : front: exiting subprocess
(the container then restarts and the same errors repeat)
converted_tflite_quantized.zip converted_tflite.zip converted_edgetpu(1).zip
here are the models for TFLite.
Custom models must match the input and output shapes that frigate expects. You need to compare your model to the one frigate uses by default to see what is different.
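One way to do that comparison is to load both models and print their tensor details. A sketch using the TFLite interpreter (frigate's bundled default is an SSD-style COCO model with a uint8 input such as [1, 300, 300, 3] or [1, 320, 320, 3] depending on the version, plus four output tensors for boxes, classes, scores, and count; verify against your own copy rather than trusting these numbers):

```python
# Sketch: dump a TFLite model's input/output tensor shapes so they can
# be diffed against frigate's default model. tflite_runtime is the
# lightweight interpreter package; 'tf.lite.Interpreter' from full
# TensorFlow behaves the same way.

def summarize(details):
    """Render interpreter tensor details as 'name shape dtype' lines."""
    return [
        "{} {} {}".format(d["name"], list(d["shape"]), d["dtype"].__name__)
        for d in details
    ]


def describe_model(path):
    from tflite_runtime.interpreter import Interpreter

    interp = Interpreter(model_path=path)
    interp.allocate_tensors()
    print("inputs:")
    for line in summarize(interp.get_input_details()):
        print(" ", line)
    print("outputs:")
    for line in summarize(interp.get_output_details()):
        print(" ", line)


# describe_model("model_edgetpu.tflite")  # run on both models and compare
```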
@corgan2222
frigate.edgetpu INFO : No EdgeTPU detected.
I had that error when trying this; it doesn't appear to detect/find the Coral TPU. I don't have one, but it went away as soon as I told it to use the CPU.
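For reference, if no Coral is reachable (under WSL2 the USB device is typically not passed through to the Linux VM by default, which would likely explain the log above), frigate can be pointed at a CPU detector just to validate the custom model. A sketch of the detectors section:

```yaml
detectors:
  cpu1:
    type: cpu
```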
converted_tflite_quantized.zip converted_tflite.zip converted_edgetpu(1).zip
here are the models for TFLite.
Did you ever get this working? I'm interested in doing something similar. Thanks.
I haven't tried this again since that post, mainly because of a lack of time. I hope @blakeblackshear implements some kind of face recognition support sometime.
An interesting project based on frigate is https://github.com/jakowenko/double-take . I don't know if blakeblackshear is aware. I'm testing it on a Jetson Nano.
P.S. 1: frigate is a great product, my compliments, @blakeblackshear. Thanks.
P.S. 2: I think that https://github.com/ageitgey/face_recognition would be a great expansion for frigate.
P.s. 2. I think that https://github.com/ageitgey/face_recognition would be a great expansion for frigate.
100%
I had a quick look at how double-take works. I was going to do something similar myself with Azure Face Recognition and Node-RED, but decided not to bother in the end, as we would both have the same issue: the snapshot that Frigate sends may not be optimised for facial recognition. Frigate sends you the first and subsequent higher-percentage "person" detections.
Quite often part of my porch door hides a person's face, or the capture from my garage camera is from too far away and would need to wait until they got closer.
It's obviously better than nothing, but it would be better to have something that could listen for the person-detected event and then do its own face processing on the video feed rather than the snapshot, so it can get the best image for faces.
Personally, I use person motion detection to trigger a Home Assistant integration that calls Deepstack Face, which is GPU accelerated. With the current shortage of Edge TPUs, I'll keep face recognition outside of Frigate for now and will try to help on the GPU side of things instead.
The benefit of doing this directly in frigate will be efficiency for the same reasons I combine motion detection and object detection in the same process.
With something like double take, the image is converted from raw pixel data to jpeg, transmitted over the network, decoded from jpeg to raw pixel data again, copied into a new memory location, and then face recognition is done. When this is added directly to frigate, the face recognition will leverage the already decoded raw pixel data and cut out all that extra processing.
I will also be able to add faces to the object detection models to ensure I get the best face snapshot. Once I recognize a face, I will be able to connect it to the tracked object and the event itself. This allows me to stop trying to recognize faces for already identified person objects and use multiple consecutive frames to combine scores and get better accuracy.
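The efficiency point can be illustrated with a toy, stdlib-only sketch: once a frame exists as raw pixel data, handing a face region to a recognizer is just a slice of that buffer, not a JPEG encode, network hop, and JPEG decode. A pure-Python stand-in for what would be a numpy view in a real pipeline:

```python
# Toy illustration: cropping a face region directly from an
# already-decoded frame. 'frame' is a row-major grid of pixel values,
# standing in for the raw yuv420p/rgb data frigate already holds.

def crop_region(frame, x, y, w, h):
    """Return the w x h sub-grid of a row-major frame at (x, y)."""
    return [row[x:x + w] for row in frame[y:y + h]]


# a fake 4x4 grayscale frame with pixel values 0..15
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
face = crop_region(frame, 1, 1, 2, 2)
print(face)  # → [[5, 6], [9, 10]]
```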
You are absolutely correct, @blakeblackshear.
Does this mean you are going to implement face recognition in frigate at some point? That would be awesome! If you need beta testers, just yell! :)
Same. I am desperate for face recognition that works. I need it for our indoor home camera setup. The use case is to keep an eye on a very elderly man with many health issues. Having face recognition would cut out a fudge ton of alerts from the home cameras as it sees us walking about.
I am happy to beta test.
Ohhh, I am so looking forward to that. Maybe I'll try automated attendance for our employee access: no entrance password, no NFC tags, no fingerprint. Just blink and smile xD
This would be amazing!
I am currently using Facebox and Node-RED on top of Frigate for face recognition when it detects a person. But this is very hit and miss, since it doesn't look for a face, but for a person!