Ability to submit specific frames to Frigate+
**Describe what you are trying to accomplish and why in non technical terms**
I want to be able to report false negatives to Frigate+ so that I can contribute them and help improve the generated models.
**Describe the solution you'd like**
I'm finding that the current models sometimes don't detect people, especially at night. At the moment we can report true positives and false positives, but there's no easy way to capture and report a false negative from a stream where Frigate didn't detect an object.
Maybe we could have a button to take a snapshot from a video and report it to Frigate+.
**Describe alternatives you've considered**
We could do it manually, but it's a bit cumbersome.
I've been thinking about this off and on, I would really like this feature as well. I would also like the ability to create NEW labels for things like rabbits, coyotes, bobcat and javelina.
I was looking at the MQTT data and noticed that it is not currently pushing motion bounding boxes, I'm thinking of extending the code to push those (I know this is going to be very verbose) but it would help me with a pipeline to do the following:
automated processing
- Listen for motion on camera(s)
- Compute a background image for the camera using OpenCV's `BackgroundSubtractorMOG` or similar and extract the background model.
- Crop the motion bounding box(es) and compute structural similarity between the motion frame and the background model (for the crops).
- If the difference is greater than some defined threshold, add the crop to a queue of images to manually review.
manual review
- Review the images and determine the label + adjust bounding box (either from the motion box or contour areas from the structural similarity check)
- Optionally use google vision search for reverse image lookup (free for first 1,000 requests a month and $3.50 per 1,000 after that I believe.)
- This would reduce manual labor
- Could also use the embedding of the crop and do a similarity search using an internal lookup to group similar images for labeling...
Just some thoughts; it would be interesting to hear feedback.
edit: it might be fun to do a dating-style swipe left/right to filter out noise vs. something important for the manual review process as well. The "noise" (aka swipe left) could be used to adjust the threshold value for the similarity check too.
Suggestion: When creating a custom model, I often ask my kid/wife to go stand in a specific area where I get a lot of false positives, so I can train it on a true positive. Once they're in position, I save the frame in the debug camera view, then upload to frigate+. It would be great if I could just push a button on the debug view and submit that frame easily. This allows for specific posing of family members and specific frames to be submitted in realtime.
Another addition to this could be some sort of auto-upload: have Frigate create a few extra snapshots when an event goes "out of frame" (and therefore ends) and then comes "back in the frame", so we can judge in Frigate+ whether that was a correct assumption. That would help reduce events that end and restart repeatedly while the object remains in place.
Please add a SEND TO FRIGATE+ button on the recording page to send the current frame. For tuning a custom model this would be very useful.
If it would help the model's learning, a mode where several snapshots are generated for the same event could be useful. Is it better to send an object with a lower score to Frigate+ instead of sending the top-score snapshot?