
About the F1 of one test dataset

Open jixiedy opened this issue 4 years ago • 1 comments

May I ask you two questions:

  1. Why is the F1 function in 'auxiliaries/eval_metrics_one_dataset' commented out, with the value instead set to 0?
  2. For an image retrieval dataset similar to VehicleID but with only one gallery, what needs to be modified for training and testing?

jixiedy avatar Oct 13 '20 03:10 jixiedy

Hey,

  1. Thank you for noting this. I had commented this out for a period during development to speed up evaluation while I was looking at metrics other than F1. If you would like to compute F1, uncomment line 221 in 'auxiliaries/eval_metrics_one_dataset'.
  2. For both training and testing, you just need a dictionary where the keys are the class names and the corresponding values are lists of the image paths for the images in that class.

For example, in 'datasets/give_VehicleID_dataset', lines 116 to 164 simply prepare these dictionaries (train_image_dict, small_test_dict, medium_test_dict, large_test_dict). This is not specialised for VehicleID, and works the same way for any retrieval dataset (in your case there is just one test dictionary).
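To make the structure concrete, here is a minimal sketch of what such dictionaries look like; the class names and image paths are made up for illustration, and with a single gallery you would only build one test dictionary:

```python
# Hypothetical example -- class names and paths are placeholders.
# Structure: class name -> list of image paths for that class.
train_image_dict = {
    'class_0001': ['data/my_dataset/class_0001/img_001.jpg',
                   'data/my_dataset/class_0001/img_002.jpg'],
    'class_0002': ['data/my_dataset/class_0002/img_001.jpg',
                   'data/my_dataset/class_0002/img_002.jpg'],
}

# Single-gallery case: one test dictionary is enough.
test_image_dict = {
    'class_0003': ['data/my_dataset/class_0003/img_001.jpg'],
    'class_0004': ['data/my_dataset/class_0004/img_001.jpg'],
}
```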

For a new dataset, you must create these dictionaries and then pass them to TrainDatasetsmoothap() and BaseTripletDataset() for the training and test set, respectively. An example closer to yours is iNaturalist, which has just one test gallery; this dataset is built in lines 49-101 of datasets.py. If you copy give_inaturalist_datasets() and replace the generation of train_image_dict and val_image_dict with the paths to your dataset, that should work.
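A rough sketch of such a function is below. This is only an illustration under assumptions: the root/train/<class>/ and root/test/<class>/ folder layout, the name give_my_datasets, and the exact constructor arguments passed to TrainDatasetsmoothap() and BaseTripletDataset() are all placeholders; mirror whatever give_inaturalist_datasets() actually does in datasets.py.

```python
import os
from datasets import TrainDatasetsmoothap, BaseTripletDataset  # classes from this repo

def give_my_datasets(root, opt):
    """Build class->image-path dictionaries for a dataset laid out as
    root/train/<class_name>/*.jpg and root/test/<class_name>/*.jpg
    (the layout is an assumption -- adapt the path handling to your data),
    then wrap them in the repo's train/test dataset classes."""

    def build_dict(split_dir):
        image_dict = {}
        for class_name in sorted(os.listdir(split_dir)):
            class_dir = os.path.join(split_dir, class_name)
            if not os.path.isdir(class_dir):
                continue
            image_dict[class_name] = [
                os.path.join(class_dir, fname)
                for fname in sorted(os.listdir(class_dir))
                if fname.lower().endswith(('.jpg', '.jpeg', '.png'))
            ]
        return image_dict

    train_image_dict = build_dict(os.path.join(root, 'train'))
    val_image_dict = build_dict(os.path.join(root, 'test'))

    # The constructor arguments below are assumptions -- copy however
    # give_inaturalist_datasets() instantiates these classes in datasets.py.
    train_dataset = TrainDatasetsmoothap(train_image_dict, opt)
    val_dataset = BaseTripletDataset(val_image_dict, opt, is_validation=True)

    return {'training': train_dataset, 'testing': val_dataset}
```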

Let me know if you still have trouble

Andrew-Brown1 avatar Oct 13 '20 12:10 Andrew-Brown1