darknet
Detection of small and large objects with different image resolutions and distances
Hi,
I'm trying to detect workers on a construction site. The same worker can appear small (if the photo was taken from far away) or large (if taken up close). I've included two sample images to show the size of the objects I'm trying to detect and how they look in the frame.
Here is my data set:
1 class
923 images for training and 116 images for validation
Image resolutions vary because the photos come from different cameras: 2112x1584, 1920x1080, 3264x1836
Workers are dynamic objects that change location and pose (they can stand, sit and work, or move around).
Object dimensions differ between training images depending on the distance the photo was taken from.
The detection images are almost the same in character as the training images.
So I have some questions:
1. Which cfg file do you recommend for training?
2. What width and height are recommended for training and detection?
3. How do I calculate correct anchors for my data set?
4. What are num_of_clusters, final_width and final_height, and how do I calculate them for my data set?
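Regarding the width/height question, one way to see why the network input resolution matters for the far-away workers is to scale a labeled box down to the network input size. A minimal sketch (the function name and the 60x120 px example box are my own illustration, not anything from darknet):

```python
# Sketch: how a labeled worker box shrinks when a high-resolution photo
# is resized to the network input resolution. Illustrative only.

def box_at_network_size(box_wh, image_wh, net_wh):
    """Scale a pixel-space box (w, h) from the original image
    to the network input resolution."""
    img_w, img_h = image_wh
    net_w, net_h = net_wh
    w, h = box_wh
    return (w * net_w / img_w, h * net_h / img_h)

# A 60x120 px worker in a 3264x1836 photo:
small = box_at_network_size((60, 120), (3264, 1836), (416, 416))
large = box_at_network_size((60, 120), (3264, 1836), (640, 640))
print(small)  # roughly (7.6, 27.2) px -- very small at 416x416
print(large)  # roughly (11.8, 41.8) px -- easier at 640x640
```

So a worker that is perfectly visible in the original photo can shrink to only a few pixels at the default 416x416, which is why a larger input such as 640x640 (network dimensions should stay multiples of 32) tends to help with distant objects.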
Here are the clusters found when running ./darknet detector calc_anchors /home/darknet/build/darknet/x64/data/workers.data -num_of_clusters 9 -width 640 -height 640 -show
num_of_clusters = 9, width = 640, height = 640
read labels from 921 images
loaded image: 921 box: 5858
all loaded.
calculating k-means++ ...
iterations = 63
avg IoU = 75.19 %
Saving anchors to the file: anchors.txt
anchors = 8, 25, 16, 53, 21, 88, 31, 66, 31,125, 50,106, 47,199, 86,160, 72,294
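For reference, calc_anchors clusters the (width, height) of all labeled boxes, scaled to the network input size, with k-means. Below is a minimal Python sketch of that idea using plain Euclidean distance (darknet's implementation uses an IoU-based distance, so its anchors will differ); the function name and toy boxes are illustrative:

```python
import random

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Plain k-means on (w, h) pairs -- a simplified stand-in for
    darknet's calc_anchors, which uses an IoU-based distance."""
    rng = random.Random(seed)
    centers = rng.sample(boxes, k)
    for _ in range(iters):
        # Assign each box to its nearest center.
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            i = min(range(k),
                    key=lambda j: (w - centers[j][0]) ** 2 + (h - centers[j][1]) ** 2)
            clusters[i].append((w, h))
        # Recompute each center as the mean of its cluster.
        new_centers = []
        for i, c in enumerate(clusters):
            if c:
                new_centers.append((sum(w for w, _ in c) / len(c),
                                    sum(h for _, h in c) / len(c)))
            else:
                new_centers.append(centers[i])
        if new_centers == centers:
            break
        centers = new_centers
    # darknet lists anchors from smallest to largest.
    return sorted(centers, key=lambda wh: wh[0] * wh[1])

# Toy boxes, already scaled to the 640x640 network input:
boxes = [(8, 24), (10, 26), (30, 70), (28, 64), (80, 160), (90, 150)]
print(kmeans_anchors(boxes, k=3))
```

The resulting (w, h) pairs are what gets written into the anchors= line of each [yolo] section of the cfg.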
Also, I cannot get mAP above 90% even after 40,000 iterations, and the average loss does not drop below 1. Any help?
Hello!
I have the same problem with detecting small defects on large surfaces. Did you manage to solve the problem you had with the small workers?