R image processing
This PR is a quality-of-life improvement for R users of autofocus. The old example assumed that images had already been preprocessed and/or zipped together. That is not the case when someone has a whole batch of camera trap images in hand.
This new example, process_predict_example.R, contains a suite of functions that can be used to:
- Collect the file names of images that you want to process (via Dan Acheson)
- Process the images in a way similar to process_raw.py. We remove the bottom 198 pixels and then reduce each image to 760x512 pixels (see the preprocessing sketch after this list).
- Zip images together into 'bundles' of 10 (see the bundling/posting sketch after this list).
- Post the zip files to autofocus, which goes much faster than posting single images.
- Process the output from autofocus to generate the 'most likely' species for each photo (i.e., reducing the many per-class probabilities to the single maximum-probability class; a sketch of this step follows the example output below).
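For the preprocessing step, here is a minimal sketch assuming the magick package (the function name `process_image()` is illustrative, not necessarily what `process_predict_example.R` exports):

```r
library(magick)

# Drop the bottom 198 pixels (the camera info bar) and force the
# image to 760x512 before sending it to autofocus.
process_image <- function(in_path, out_path) {
  img  <- image_read(in_path)
  info <- image_info(img)
  # crop geometry is "widthxheight+x_offset+y_offset"
  img <- image_crop(
    img,
    geometry = paste0(info$width, "x", info$height - 198, "+0+0")
  )
  # the "!" forces the exact size rather than preserving aspect ratio
  img <- image_resize(img, "760x512!")
  image_write(img, out_path)
  invisible(out_path)
}
```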
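The bundling and posting steps might look roughly like this (a sketch assuming the zip and httr packages; `image_paths` is a character vector of processed files, and the endpoint URL and form field name are placeholders rather than the real autofocus API):

```r
library(zip)
library(httr)

# Split the processed image paths into bundles of 10
bundles <- split(image_paths, ceiling(seq_along(image_paths) / 10))

# Zip each bundle into its own temporary archive
zip_files <- vapply(bundles, function(paths) {
  zf <- tempfile(fileext = ".zip")
  zip::zipr(zf, files = paths)
  zf
}, character(1))

# Post each zip to autofocus (placeholder URL and field name)
responses <- lapply(zip_files, function(zf) {
  httr::POST(
    url  = "http://localhost:8000/predict_zipped",  # hypothetical endpoint
    body = list(file = httr::upload_file(zf))
  )
})
```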
In R, you then end up with a svelte data.frame that contains the original file name and the most likely species in that photo. As an example:
best_ids
# A tibble: 7 x 3
file species probability
<chr> <chr> <dbl>
1 C:/Users/mfidino/Documents/GitHub/autofocus/~ squirr~ 0.584
2 C:/Users/mfidino/Documents/GitHub/autofocus/~ bird 0.986
3 C:/Users/mfidino/Documents/GitHub/autofocus/~ raccoon 0.960
4 C:/Users/mfidino/Documents/GitHub/autofocus/~ rabbit 0.934
5 C:/Users/mfidino/Documents/GitHub/autofocus/~ raccoon 0.997
6 C:/Users/mfidino/Documents/GitHub/autofocus/~ skunk 0.999
7 C:/Users/mfidino/Documents/GitHub/autofocus/~ deer 1.000
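The reduction from the many per-class probabilities to the single best guess per file could be done along these lines (a sketch assuming a long-format `predictions` data.frame with `file`, `species`, and `probability` columns and the dplyr package):

```r
library(dplyr)

# Keep only the highest-probability species for each file
best_ids <- predictions %>%
  group_by(file) %>%
  slice_max(probability, n = 1, with_ties = FALSE) %>%
  ungroup()
```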
Finally, the images (and associated zip files) that get processed are treated as temporary files, so you don't have to create a secondary batch of images. This is most useful for the prediction side of autofocus (for training we'd probably want to retain the processed images).
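In practice that just means the processed copies are written somewhere like `tempdir()` and cleaned up with the R session, e.g. (a sketch; `original_path` is a placeholder):

```r
# Processed copies live in the R session's temporary directory
out_path <- file.path(tempdir(), basename(original_path))

# Any leftover zip bundles can be removed explicitly when finished
unlink(list.files(tempdir(), pattern = "\\.zip$", full.names = TRUE))
```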