
Replace -m and -s with -r for directly comparing results?

PhantomzBack opened this issue 4 years ago • 5 comments

I have an external model that I have already trained, and I already have its inference results. Is it possible to run the accuracy check directly on those results, rather than running the model again on the input images?

So this is what I mean: accuracy_check -c path/to/configuration_file -m /path/to/models -s /path/to/source/data -a /path/to/annotation

should be replaced with

accuracy_check -c path/to/configuration_file -r path/to/model_execution_results -a /path/to/annotation

PhantomzBack avatar Mar 24 '21 11:03 PhantomzBack

@eaidova could you please comment?

vladimir-dudnik avatar Mar 24 '21 12:03 vladimir-dudnik

@raviarora1209 I do not think that it is possible in the way you described (through -r), because we do not know the format in which your model provides the results or how they should be parsed.

There are two options you can consider:

  1. Offline evaluation mode - only the inference part is skipped, while handling of raw output is kept (adapters and postprocessing are preserved). Predictions are provided in a pickle file via the --stored_predictions option. It should contain a list of predictions wrapped in the StoredPredictionBatch class (a minimal sketch follows this list), where
  • raw_predictions - a dictionary with layer names as keys and the corresponding tensors as numpy arrays as values.
  • identifiers - a list of the images that these predictions correspond to within the batch.
  • meta - additional info required for decoding (e.g. input image size), stored as a list of per-image dictionaries.
  2. Dummy launcher - the dummy launcher can handle text files with predictions, such as JSON or XML, but implementing some additional parsing logic may be required. Currently we work with JSON meta from GVA detect/classify as an external pipeline (not sure that this is an informative example for you), but I can provide an example configuration if needed.
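For option 1, a minimal sketch of producing such a pickle is below. The StoredPredictionBatch name and its raw_predictions / identifiers / meta fields come from the description above; the real class lives inside the accuracy_checker package and its import path and exact constructor are not shown in this thread, so a stand-in dataclass is used here, and the layer name, image identifiers, and tensor shapes are placeholders.

# Sketch: dump externally computed raw outputs into a pickle whose structure
# mirrors the one described above (raw_predictions / identifiers / meta).
# NOTE: StoredPredictionBatch below is a stand-in with the same three fields;
# the real class should be imported from the accuracy_checker package.
import pickle
from dataclasses import dataclass, field

import numpy as np


@dataclass
class StoredPredictionBatch:  # stand-in for the real accuracy_checker class
    raw_predictions: dict     # layer name -> numpy tensor for the whole batch
    identifiers: list         # image identifiers covered by this batch
    meta: list = field(default_factory=list)  # per-image decoding info (e.g. input size)


# One entry per batch produced by your external runtime.
batches = [
    StoredPredictionBatch(
        raw_predictions={"detection_out": np.zeros((2, 1, 100, 7), dtype=np.float32)},  # placeholder tensor
        identifiers=["img_000.jpg", "img_001.jpg"],
        meta=[{"image_size": (480, 640, 3)}, {"image_size": (480, 640, 3)}],
    )
]

with open("predictions.pickle", "wb") as f:
    pickle.dump(batches, f)

# Then (hypothetically) point accuracy_check at the stored file:
#   accuracy_check -c config.yml -a /path/to/annotation --stored_predictions predictions.pickle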

eaidova avatar Mar 24 '21 12:03 eaidova

@eaidova what if it were a well-known object detection format such as COCO or YOLO? We wish to avoid executing the model again and again (and this would also help when the model is not supported by OpenVINO but its output format matches one of the standard ones).

PhantomzBack avatar Mar 24 '21 15:03 PhantomzBack

@raviarora1209 Currently it is not supported, but we may consider this in the future. Do you have a preference for a specific format? I believe that for the COCO format you can use pycocotools directly, without any proxy like AC, right? I am not familiar with Darknet and have not heard of a prediction format from it, only the YOLO annotation format, which is not suitable as a prediction format because it does not contain the predicted bbox score, so there is no way to rank boxes, which is important for metric calculation. Also, OpenVINO is the primary use case for AccuracyChecker, but not the only one. Supported inference launchers include Caffe, TensorFlow, MXNet, ONNXRuntime, OpenCV DNN, PyTorch, and PaddlePaddle.
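For the pycocotools route mentioned above, a minimal sketch of scoring an existing detection results file directly (no AccuracyChecker involved) could look like the following; the file names are placeholders, and the detections file is assumed to already be in the standard COCO results JSON format (image_id, category_id, bbox, score).

# Sketch: evaluate a COCO-format detection results file directly with
# pycocotools, skipping model execution entirely. File paths are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("instances_val2017.json")         # ground-truth annotations
coco_dt = coco_gt.loadRes("my_detections.json")  # COCO results format

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints the AP/AR table, including mAP@[.5:.95]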

eaidova avatar Mar 24 '21 15:03 eaidova

Alright, thanks! I have not explored pycocotools much, but I will check it out. If you are considering a specific format, maybe go with one of the standard ones, or you could possibly create a very simple format to which anybody can easily translate their results.

Cheers!

PhantomzBack avatar Mar 25 '21 16:03 PhantomzBack