DVR-Scan
Brightness change detected as motion
Hello, it's a great tool, but is there any way to handle brightness changes (like sun and clouds) without triggering motion detection? Maybe by defining the maximum size that a changing area/blob can have (like 25% of the full image)?
Hey @bossjl,
Hoping some others more familiar with methods to resolve this can chime in. Ideally the brightness changes could be smoothed out with another method (e.g. histogram matching), but what you suggest also seems feasible. It would be equivalent to setting both a minimum and a maximum threshold (versus what happens now, which is just a minimum). I'm unsure how robust a solution that would be, so I definitely would like to do some more research and hear some other ideas on the topic first.
As-is, I think anyone should be able to add in a maximum threshold argument, so I'll tag this as help wanted for now.
Thanks for the suggestion!
@bossjl do you happen to have any sample videos exhibiting this that I could use for testing? Thanks!
Did some quick research into this; it may be worth investigating whether OpenCV's exposure compensation will work here. It's primarily intended for stitching different images together, so it might be overkill - need to do some performance checks.
Other than that, we can consider some kind of histogram matching as a first pass, and occasionally update the reference histogram when some measure - say the average brightness of the frame - changes by a certain threshold. The downside of this however is it might be a relatively large performance hit, so it would likely need to be an option rather than always enabled.
Edit: ~~A relatively simple solution might be to average each frame, and use it to calculate a rolling average. Then, multiply all pixels in the frame by the required amount so that its average is the same as the rolling average.~~ Too simple - this causes even worse results, since it also affects light areas which may not be underexposed. Need to consider some kind of exposure compensation algorithm which may apply different corrections to different parts of the frame.
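For reference, the struck-out idea above could be sketched like this (a minimal numpy sketch with a hypothetical `RollingBrightnessNormalizer` class; as noted, a single global gain also rescales correctly exposed regions, which is exactly why it was rejected):

```python
import numpy as np

class RollingBrightnessNormalizer:
    """Naive global-gain correction: scale each frame so its mean matches a
    rolling average of recent frame means. A single global gain also affects
    regions that are not under/overexposed, so this can make results worse."""

    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha          # smoothing factor for the rolling mean
        self.rolling_mean = None    # lazily initialized from the first frame

    def apply(self, gray: np.ndarray) -> np.ndarray:
        mean = float(gray.mean())
        if self.rolling_mean is None:
            self.rolling_mean = mean
        gain = self.rolling_mean / max(mean, 1e-6)
        corrected = np.clip(gray.astype(np.float32) * gain, 0, 255).astype(np.uint8)
        # Update the reference after correcting so it drifts slowly.
        self.rolling_mean = (1 - self.alpha) * self.rolling_mean + self.alpha * mean
        return corrected
```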
@Breakthrough Hi, I'm interested in this too and looking for solutions. If you want, I can provide you with a sample video for testing (it contains both real motion and brightness changes caused by sun and clouds).
Hello @gch1p, yes if you could provide a sample that would be very useful. Thank you!
Alright, can you give me your email? Or some other contact where I can send it privately
Is it possible to post it publicly here? If possible I would like to add it to the repo to use for future development / test cases. If not, that's understandable. Thanks!
Hi, hoping to revive this. Attaching a false positive from clouds passing by; you can use it at will. May I suggest applying the exposure compensation technique not on every frame, but only on the ones that have triggered the "normal" detection method? Like a "second-phase" check. BrightnessFalsePositive.zip Great program! Keep at it :)
btw, this was generated with --threshold 0.85 -a 144 911 445 907 474 1080 141 1080 (a very small region on the bottom left driveway)
Thanks for the sample! Exposure compensation is definitely the right way to go, however it might be very expensive if it needs to be done as a second pass. This is because you would need to run background subtraction twice on each frame - once to detect the threshold without updating the model, and again just to update the model. I'm not opposed to pursuing this solution, but would like to think about alternatives too that might provide better performance.
That being said, it would probably not be too difficult to try something like you suggested; the new internal API for DVR-Scan is quite hackable in this regard: https://github.com/Breakthrough/DVR-Scan/blob/main/dvr_scan/subtractor.py
Would be happy to see any PRs that might add support for this, even if it isn't that efficient.
I wonder if a better solution might lie in histogram correction or keeping a running average of the current exposure level, and using that to compensate frames as they are fed into the pipeline. Thoughts?
Edit: I tried some of the methods outlined below but had some difficulty making it consistent: https://stackoverflow.com/questions/56905592/automatic-contrast-and-brightness-adjustment-of-a-color-photo-of-a-sheet-of-pape/56909036
Might be worth also looking into how OpenCV does exposure comp for image stitching: https://github.com/opencv/opencv/blob/ae347ab493110eb774189fa6e533838ad498da5d/modules/stitching/src/stitcher.cpp#L204
There's a few other parameters that the background models can set which I should add config file options for. In particular I suspect the history size would be pretty relevant as it likely needs to be adjusted based on framerate. I tried lowering the history to 200 (default 500) and increasing the variance threshold to 100 (default 16) and had some success with reducing false positives. These parameters are described more here: https://docs.opencv.org/3.4/d7/d7b/classcv_1_1BackgroundSubtractorMOG2.html#ab8bdfc9c318650aed53ecc836667b56a
Adding config file options for these has long been on my TODO list, but likely won't fix all cases like this where there are rapid brightness changes. After giving it some more thought, I suspect histogram matching might be the way to go. In the processing pipeline, the input to the subtractor model must be a 1-channel image. To filter out brightness changes across frames, a histogram could be calculated on each frame, and used to calculate an average histogram for the past N frames. This could be used to correct the frame before subtraction, by shifting each pixel value such that the resulting histogram matches the calculated average.
This should make things more robust to sudden brightness changes covering a large portion of the frame, while still preserving enough local contrast for areas with motion to still be distinguishable. I haven't had much time to prototype this yet, but it should be doable with reasonable performance.
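The idea above could be prototyped roughly like this (a numpy-only sketch using classic CDF-based histogram matching; the `RollingHistogramMatcher` class and its `window` parameter are hypothetical names, not DVR-Scan API):

```python
import numpy as np
from collections import deque

class RollingHistogramMatcher:
    """Keep histograms of the last N frames and remap each new frame so its
    histogram matches the rolling average, as described above."""

    def __init__(self, window: int = 30):
        self.hists = deque(maxlen=window)

    def apply(self, gray: np.ndarray) -> np.ndarray:
        hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
        self.hists.append(hist)
        ref = np.mean(self.hists, axis=0)
        # Classic histogram matching: map each gray level through the CDFs.
        src_cdf = np.cumsum(hist) / hist.sum()
        ref_cdf = np.cumsum(ref) / ref.sum()
        lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
        return lut[gray]
```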
I was actually able to make a reasonable improvement on that test case you provided @jchennales. Using `skimage.exposure.match_histograms`
with just the first frame of the video as a reference, here is what the result looked like: issue53-corrected.zip (no more false positives either)
The color space conversion just uses `COLOR_BGR2GRAY` right now, so using HSV or CIE LAB would probably help a bit too. This proves the idea can work in certain cases. Still need to figure out how to deal with changing the reference temporally over time, as abrupt changes to the matching could themselves cause false positives.
Edit: Even without the reference frame changing this is still probably worth adding as a feature for testing.
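For anyone who wants to reproduce the experiment, a minimal sketch (assuming single-channel uint8 frames; the `match_to_reference` helper name is just illustrative):

```python
import numpy as np
from skimage.exposure import match_histograms

def match_to_reference(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Match the frame's histogram to a fixed reference frame (e.g. the
    first frame of the video) before background subtraction. Both inputs
    are assumed to be single-channel uint8 images."""
    matched = match_histograms(frame, reference)
    # match_histograms returns floats; convert back to a uint8 image.
    return np.clip(matched, 0, 255).astype(np.uint8)
```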