Wang, Mengni
I found that some models lack data preprocessing details and even accuracy information, for example https://github.com/onnx/models/tree/master/vision/classification/inception_and_googlenet/inception_v2. Where can I get these details to reproduce the reported results?
# EfficientNet-Lite4

## Use Cases

EfficientNet-Lite4 is an image classification model that achieves state-of-the-art accuracy. It is designed to run on mobile CPU, GPU, and EdgeTPU devices, allowing for applications...
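For reference, below is a minimal inference sketch with ONNX Runtime. The model filename, the 224x224 NHWC input layout, and the (pixel - 127) / 128 normalization are assumptions based on typical EfficientNet-Lite exports, not details confirmed by the excerpt above.

```python
# Minimal sketch: running EfficientNet-Lite4 with ONNX Runtime.
# Assumptions (not confirmed by the excerpt above): 224x224 RGB input,
# NHWC layout, and (pixel - 127) / 128 normalization.
import numpy as np
import onnxruntime as ort
from PIL import Image

def preprocess(image_path: str) -> np.ndarray:
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    data = np.asarray(img, dtype=np.float32)
    data = (data - 127.0) / 128.0          # assumed normalization
    return np.expand_dims(data, axis=0)    # shape (1, 224, 224, 3), NHWC

sess = ort.InferenceSession("efficientnet-lite4.onnx")  # assumed filename
input_name = sess.get_inputs()[0].name
logits = sess.run(None, {input_name: preprocess("cat.jpg")})[0]
print("Top-1 class index:", int(np.argmax(logits)))
```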
Signed-off-by: Mengni Wang

## Type of Change

bug fix

## Description

detail description

JIRA ticket: https://jira.devtools.intel.com/browse/ILITV-2529
Signed-off-by: mengniwa

## Type of Change

example update

## How has this PR been tested?

extension test
## Type of Change

feature

## Description

Support FP16 for the onnxrt adaptor
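For context, a rough sketch of what FP16 conversion through the onnxrt adaptor might look like from the user side. The config fields and the backend string below are assumptions drawn from Neural Compressor 2.x mixed-precision documentation, not from this PR.

```python
# Hedged sketch: converting an ONNX model to FP16 via Neural Compressor's
# mixed-precision path. Config fields and the backend string are assumptions
# based on Neural Compressor 2.x docs, not on this PR.
from neural_compressor import mix_precision
from neural_compressor.config import MixedPrecisionConfig

conf = MixedPrecisionConfig(
    device="gpu",                 # FP16 typically targets GPU execution
    backend="onnxrt_cuda_ep",     # assumed backend name for the CUDA EP
    precisions="fp16",
)
converted = mix_precision.fit(model="model.onnx", conf=conf)
converted.save("model_fp16.onnx")
```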
# Bug Report

### Which model does this pertain to?

https://github.com/onnx/models/tree/main/vision/object_detection_segmentation/retinanet
https://github.com/onnx/models/tree/main/text/machine_comprehension/roberta

### Describe the bug

For retinanet, using the preprocess and postprocess code in the README, I can't reproduce mAP 0.376....
## Type of Change

bug fix

## Description

Fix onnxrt calibration issue

## Expected Behavior & Potential Risk

the expected behavior that triggered by this PR

## How has this...
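For context, calibration in the onnxrt adaptor feeds sample data through the model to collect activation ranges for static quantization. A minimal sketch of that flow, assuming the Neural Compressor 2.x PostTrainingQuantConfig / quantization.fit API and a hypothetical calibration dataset:

```python
# Hedged sketch: post-training static quantization with a calibration
# dataloader through Neural Compressor's onnxrt adaptor. The config class,
# fit() signature, and DataLoader usage are assumptions based on Neural
# Compressor 2.x docs; calib_data is a hypothetical stand-in.
import numpy as np
from neural_compressor import PostTrainingQuantConfig, quantization
from neural_compressor.data import DataLoader

# Hypothetical calibration samples (input, label) matching the model's input shape.
calib_data = [(np.random.rand(3, 224, 224).astype(np.float32), 0) for _ in range(32)]

conf = PostTrainingQuantConfig(approach="static")   # static PTQ needs calibration
calib_dataloader = DataLoader(framework="onnxruntime", dataset=calib_data)

q_model = quantization.fit(
    model="model.onnx",
    conf=conf,
    calib_dataloader=calib_dataloader,
)
q_model.save("model_int8.onnx")
```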
## Type of Change

bug fix

## Description

Fix code_detection example export issue

## Expected Behavior & Potential Risk

the expected behavior that triggered by this PR

## How has...