Allow min/max area filters to be configurable based on location
Describe the problem you are having
Hello,
I'm trying to filter out false positives using min_area/max_area. However, the right values for min_area/max_area depend on whether the person is near to or far from the camera. Therefore I'm using zones:
```yaml
zones:
  near:
    coordinates: ...
    filters:
      person:
        min_area: 90000
        max_area: 400000
  away:
    coordinates: ...
    filters:
      person:
        max_area: 130000
```
This works as expected (#1738)
However, I'm still receiving events on the MQTT events topic for detections labeled as person that are not in any zone.
I could filter them out in Home Assistant (i.e., by checking that at least one zone is set), but this adds complexity (cameras with zones vs. cameras without zones, etc.).
I'm already using the required_zones parameter:
```yaml
record:
  enabled: true
  retain_days: 0
  events:
    objects:
      - person
    required_zones:
      - near
      - away

mqtt:
  required_zones:
    - near
    - away
```
IMO, using required_zones under mqtt should prevent any MQTT message (including events) when the detection is not in one of the required_zones.
What is your opinion?
Is there another way to achieve the same goal?
Thanks for your hard work for this project!
Version
0.10.0-c1155af
Frigate config file
See relevant parts above
Relevant log output
N/A
FFprobe output from your camera
N/A
Frigate stats
N/A
Operating system
Other Linux
Install method
Docker Compose
Coral version
PCIe
Network connection
Wired
Camera make and model
N/A
Any other information that may be helpful
No response
The required zones under mqtt only prevent the snapshot topics from being published for the event.
Setting min/max 1) per camera, 2) per zone, and 3) per object would be nice.
😄 https://github.com/blakeblackshear/frigate/issues/2282 Because an elephant is bigger than a mouse, but they look the same size when the elephant is far away and the mouse sits directly in front of the camera. 😄
You can already define a min/max per camera/zone/label. But the issue (well, this is a feature at the moment) is that events are still published: they are not considered detection false positives but zone false positives. The purpose of min_area/max_area is to detect false positives, isn't it? Then why publish events that don't match these criteria?

EDIT: the workaround is to create at least one zone per camera (even if the zone is the whole image) and then filter out events without a zone, but this is not straightforward.
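The whole-image zone workaround could be sketched like this. The zone name, coordinates (assuming a hypothetical 1280x720 camera), and filter values are all illustrative; adjust them to your resolution and scene:

```yaml
zones:
  whole_frame:
    # Four corners of the full 1280x720 image, clockwise from top-left
    coordinates: 0,0,1280,0,1280,720,0,720
    filters:
      person:
        max_area: 130000
```

Every tracked object then lands in at least one zone, so downstream automations can discard events whose entered_zones list is empty.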
Zone filters restrict when a zone is listed for an event and don't impact whether or not the object is considered a false positive. Object filters (camera level) will filter out objects as false positives for the camera.
If you have 3 zones defined and they are all listed as required zones for snapshots and record, the events that don't enter any of the zones are just discarded at the end because it doesn't have any of the required zones, not because it is considered a false positive. Frigate still detects it and tracks it actively as a true positive until the event is over.
Zones aren't meant to be used as a tool to identify false positives. They are a tool to determine when an object has entered an area of interest.
I'm not suggesting this is as clear as it should be. Just explaining how it works today.
Whether or not an object is a false positive is not based on a single frame. It's a computed value based on the median of the last 10 scores as it is being tracked. Once it crosses the threshold, it is considered a true positive no matter where it goes. In this paradigm, you can't simply look at a single frame where the object is in a zone. There does need to be a way to adjust object filters based on the position of the object, but that's totally different than what the current zone filters are meant for. Long term, I hope to be able to infer appropriate sizes based on past detections.
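The median-of-recent-scores logic described above can be sketched in a few lines of Python. This is an illustrative reimplementation, not Frigate's actual code; the threshold value is a hypothetical example:

```python
from statistics import median

THRESHOLD = 0.7    # hypothetical detection threshold
HISTORY_LEN = 10   # Frigate uses the median of the last 10 scores

def is_true_positive(score_history, threshold=THRESHOLD):
    """Return True once the median of the recent scores crosses the threshold."""
    recent = score_history[-HISTORY_LEN:]
    return median(recent) >= threshold

# A single high-scoring frame among low ones does not flip the decision...
print(is_true_positive([0.4, 0.5, 0.9]))         # False (median 0.5)
# ...but a consistently high-scoring object does.
print(is_true_positive([0.8, 0.85, 0.9, 0.82]))  # True (median 0.835)
```

This illustrates why a single frame inside a zone cannot decide the false-positive question: the decision is made over the object's whole tracked history.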
Thanks for the explanation. Then the question is: how do I filter false positives using min_area/max_area, considering that the size of an object depends on its distance from the camera?
Maybe there is no solution at the moment, in which case what would be the best solution in your opinion? I can try to implement it.
> It's a computed value based on the median of the last 10 scores as it is being tracked
This is a good way to reduce FPs. But I still have some, and min_area/max_area looks quite effective at filtering them out. Anyway, I get the point: zones were not designed for that.
The best workaround at the moment is to define zones with filters and then ensure they are all listed in required_zones. This won't have any impact on the events mqtt topic, so you will have to use a condition to check for at least one zone in entered_zones.
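The entered_zones condition suggested above could look like this in a Home Assistant automation. The topic and payload fields follow Frigate's MQTT events documentation; the alias and notify service are hypothetical placeholders:

```yaml
automation:
  - alias: "Person detected in a required zone"
    trigger:
      - platform: mqtt
        topic: frigate/events
    condition:
      # Only continue if the tracked object has entered at least one zone
      - condition: template
        value_template: "{{ trigger.payload_json['after']['entered_zones'] | length > 0 }}"
    action:
      - service: notify.notify
        data:
          message: "Person in {{ trigger.payload_json['after']['entered_zones'] | join(', ') }}"
```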
The object filters are incorporated into the processing pipeline much further upstream, during object detection. This is where you would want to filter false positives based on location and size, but there is no awareness of zones at that stage. Zone filters are applied much later in processing.
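The ordering described above might be sketched as follows. This is an illustrative simplification, not Frigate's actual internals: function and field names are invented, and the geometric point-in-zone check is omitted so only the area filters are shown:

```python
def process_frame(detections, camera_filters, zones):
    # 1. Camera-level object filters run first, during detection:
    #    anything outside min/max area is dropped as a false positive
    #    and never becomes a tracked object.
    tracked = [d for d in detections
               if camera_filters["min_area"] <= d["area"] <= camera_filters["max_area"]]

    # 2. Zone filters run much later: they only decide which zones get
    #    listed on an already-tracked object (position check omitted here).
    for obj in tracked:
        obj["entered_zones"] = [
            name for name, zone in zones.items()
            if zone["min_area"] <= obj["area"] <= zone["max_area"]
        ]
    return tracked
```

An object rejected in step 1 never generates an event; an object rejected in step 2 is still tracked and published, just without that zone listed.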
It still feels like a good enhancement if it were possible to define min/max as a general default for every object category.
A min/max definition would improve the reliability of a given AI model's inference results: one could exclude falsely detected object types through human correction, by ruling out basically illogical sizes for certain object types.
Of course, this works best at the camera level, because each camera has a specific resolution (frame size).
If one could also use this min/max filtering mechanism at a later stage, deeper within the usual zones, one could additionally catch a few special FPs in some concrete zone situations.
As a first step, a basic human-defined set of logical min/max values per object would help to exclude false positives with illogically large sizes for small objects (and vice versa).
(An AI model with image segmentation would also help. Probably not feasible with TF-Lite, but one could define min/max areas for specific contours and object shapes. Maybe an idea for a distant future of Frigate...)
You already can define min/max values per object type for each camera.
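For reference, the camera-level per-object filters mentioned here look roughly like this. The camera name and values are illustrative placeholders, not recommendations:

```yaml
cameras:
  front_door:
    objects:
      filters:
        person:
          min_area: 5000
          max_area: 100000
```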
> You already can define min/max values per object type for each camera.
Sure, you're right. I had overlooked that and didn't quite have it in mind anymore, although of course I use it.
My original problem with the false positives still persists, like the thread starter's: to minimize FPs for the same object type in the foreground and in the background of the same camera, I need two different size definitions.
I could maybe use two zones for that, but that's not really what they are for, and it wouldn't be the way designed for this problem, I guess... 💁♂️
I have a few cases, particularly with my doorbell, where the camera is mounted low and sees distant approaching objects; being able to set zone-specific filters would help.
So is this still open, or was it implemented?
In this discussion https://github.com/blakeblackshear/frigate/discussions/17839 it is indicated that this should already be supported. Is it?
```yaml
zones:
  front_yard:
    filters:
      person:
        max_area: 400000
```
I have a large inflatable in my front yard that sets off person detection every time it inflates in the evening. Its area is MUCH larger than what I have defined here.
I can't set this at the camera level, because a person can walk very close to the camera and take up more area.
The filter stops the object from being listed in the zone, but not from being detected in general.
> It stops the object from being in the zone but not from being detected in general
I'm not sure I want to burn more training false positives to stop the giant inflatable reindeer in my front yard from being detected as a person every time it inflates or it's windy.
It is clearly WAY larger than a person would be (in that zone, by the bottom bound), but unfortunately a person can take up that much area on two separate cameras if they walk close to them.
I had a similar problem with Halloween decorations but was able to deal with those a different way.
Edit:
I dug deeper into the object timeline and found that the bounding box was inconsistent, at times smaller than my max_area.
Guess I will have to keep submitting more false positives.