robustTemplateMatching
Major detection bugs fixed
The changes are as follows:
- `NCC_part` had a tendency to go out of index for some types of images; this has been fixed with edge-case checks.
- Added an option to provide a manual threshold as a command-line argument.
- Added the command that fixes the Cython build error to README.md. This helps build the Cython files on modern Python versions.
- Refactored the code.
- Avoided potential NaNs by adding an eps during matrix/tensor operations.
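The eps fix in the last bullet can be illustrated with a minimal sketch of a zero-normalized cross-correlation (the function name `ncc` and the eps value here are illustrative, not the repo's actual `NCC_part` code): on a flat patch both variances are zero, so without the eps the denominator is zero and the score becomes NaN.

```python
import numpy as np

def ncc(feature_a, feature_b, eps=1e-8):
    """Zero-normalized cross-correlation between two equally-shaped arrays.

    The eps keeps the denominator away from zero on flat (constant)
    patches, which would otherwise produce NaN scores.
    """
    a = feature_a - feature_a.mean()
    b = feature_b - feature_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)
```

With the eps, a constant patch matched against anything simply scores 0 instead of propagating NaNs through the rest of the matching pipeline.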
Credits: Some of the code was taken from this repository; I have cleaned it up and done extensive manual QA testing.
Thank you, Arka Mukherjee
Dude, you've done great work! I ran into the same problem, and your changes work perfectly.
(P.S. Do you know of any other good template matching methods for multi-modal images, or have you made further improvements to this repo? The code in this repo still has an offset of dozens of pixels on my datasets.)
@Sly-Guo Thanks for the feedback and for testing the code.
QATM is a little more robust; @kamata1729 has another repo with QATM. This particular paper uses NCC, which has the caveats highlighted in the algorithm: we do generic correlation after extracting features from a CNN, so it shares the downsides of the regular correlation process.
@Sly-Guo The threshold is also configurable now with my bug fixes, which could help (it was locked before).
Got it! Thanks a lot for your work!
@Arka161 I tried QATM (and thanks for your PR in that repo as well, it really helps), but the results are worse than with this robustTemplateMatching. Maybe that's because QATM doesn't work well on multi-modal images? (Considering that my data are multi-modal.)