onnxruntime inference
Search before asking
- [X] I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
I've converted yolov5s.pt to yolov5s.onnx. Now I want to use ONNX Runtime for inference, but all of the final evaluation results are 0, like this:

```
Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
```
Additional
The Python code is:

```python
import os
import cv2
import numpy as np
import onnxruntime as ort
from pathlib import Path
from tqdm import tqdm
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
from utils.general import coco80_to_coco91_class
import json

DATASET_PATH = '/COCO2017'
MODEL_PATH = './yolov5s.onnx'
IMG_SIZE = 640
CONF_THRESH = 0.1
IOU_THRESH = 0.6

data_paths = {
    "train_images": os.path.join(DATASET_PATH, "train2017.txt"),
    "val_images": os.path.join(DATASET_PATH, "val2017.txt"),
    "annotations_train": os.path.join(DATASET_PATH, "annotations", "instances_train2017.json"),
    "annotations_val": os.path.join(DATASET_PATH, "annotations", "instances_val2017.json"),
}


def xywh2xyxy(x):
    y = np.copy(x)
    y[..., 0] = x[..., 0] - x[..., 2] / 2
    y[..., 1] = x[..., 1] - x[..., 3] / 2
    y[..., 2] = x[..., 0] + x[..., 2] / 2
    y[..., 3] = x[..., 1] + x[..., 3] / 2
    return y


def xyxy2xywh(x):
    y = np.copy(x)
    y[..., 0] = (x[..., 0] + x[..., 2]) / 2
    y[..., 1] = (x[..., 1] + x[..., 3]) / 2
    y[..., 2] = x[..., 2] - x[..., 0]
    y[..., 3] = x[..., 3] - x[..., 1]
    return y


def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, max_det=300):
    xc = prediction[..., 4] > conf_thres
    output = [np.zeros((0, 6))] * prediction.shape[0]
    for xi, x in enumerate(prediction):
        x = x[xc[xi]]
        if not x.shape[0]:
            continue
        x[:, 5:] *= x[:, 4:5]
        box = xywh2xyxy(x[:, :4])
        conf = x[:, 4]
        j = np.argmax(x[:, 5:], axis=1)
        x = np.concatenate((box, conf[:, None], j[:, None]), axis=1)[conf > conf_thres]
        if not x.shape[0]:
            continue
        c = x[:, 5:6] * 4096
        boxes, scores = x[:, :4] + c, x[:, 4]
        i = cv2.dnn.NMSBoxes(boxes.tolist(), scores.tolist(), conf_thres, iou_thres)
        output[xi] = x[i].reshape(-1, 6)[:max_det]
    return output


def preprocess_image(image_path):
    img = cv2.imread(image_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE), interpolation=cv2.INTER_LINEAR)
    img = img.transpose(2, 0, 1).astype(np.float32)
    img /= 255.0
    return np.expand_dims(img, axis=0)


def infer_with_onnxruntime(session, img_tensor):
    input_name = session.get_inputs()[0].name
    outputs = session.run(None, {input_name: img_tensor})
    return outputs[0]


def save_coco_results(predictions, image_ids, coco, output_file):
    results = []
    # class_map = {i: cat_id for i, cat_id in enumerate(sorted(coco.getCatIds()))}  # earlier attempt
    class_map = coco80_to_coco91_class()
    print("Class map:", class_map)
    for preds, img_id in zip(predictions, image_ids):
        for pred in preds[0]:
            print("Shape of pred:", pred.shape)
            box = pred[:4]
            conf = pred[4]
            print("Value of pred[5]:", pred[5])
            cls = int(pred[5])
            if cls >= len(class_map):
                print(f"Skipping invalid class index: {cls}")
                continue
            # category_id = class_map.get(cls, None)  # earlier attempt with a dict class_map
            category_id = class_map[cls]
            if category_id is None:
                print(f"Unknown class ID: {cls}, skipping...")
                continue
            print(f"Image ID: {img_id}, Category ID: {category_id}, Box: {box}, Score: {conf}")
            print(f"Predicted COCO80 cls: {cls}, Mapped COCO91 category_id: {category_id}")
            box = xyxy2xywh(np.array(box).reshape(1, 4))[0]
            box = [max(0, round(x, 3)) for x in box]
            results.append({
                "image_id": int(img_id),
                "category_id": category_id,
                "bbox": box,
                "score": round(conf, 5)
            })
    with open(output_file, 'w') as f:
        json.dump(results, f)


def run_inference(images_file, annotations_file, session, dataset_name):
    coco = COCO(annotations_file)
    with open(images_file) as f:
        image_paths = [line.strip() for line in f.readlines()]
    predictions, image_ids = [], []
    for img_path in tqdm(image_paths, desc=f"inference: {dataset_name}"):
        img_tensor = preprocess_image(os.path.join(DATASET_PATH, img_path))
        preds = infer_with_onnxruntime(session, img_tensor)
        preds = non_max_suppression(preds, conf_thres=CONF_THRESH, iou_thres=IOU_THRESH)
        predictions.append(preds)
        image_ids.append(Path(img_path).stem)
    output_file = f"coco_predictions_{dataset_name}.json"
    save_coco_results(predictions, image_ids, coco, output_file)
    coco_dt = coco.loadRes(output_file)
    coco_eval = COCOeval(coco, coco_dt, iouType="bbox")
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()


def main():
    session = ort.InferenceSession(MODEL_PATH, providers=["CUDAExecutionProvider"])
    run_inference(data_paths["train_images"], data_paths["annotations_train"], session, "train2017")
    run_inference(data_paths["val_images"], data_paths["annotations_val"], session, "val2017")


if __name__ == "__main__":
    main()
```
👋 Hello @ZCzzzzzz, thank you for your interest in YOLOv5 🚀!
It seems you are running into issues when using ONNX Runtime for inference with your YOLOv5 model, leading to zero results in your evaluation metrics. To assist you better, could you kindly provide a minimum reproducible example (MRE)? This should include the specific input data, model conversion details, and any additional configurations you used. The goal is to allow us to reproduce and debug the issue effectively. 🙏
In the meantime, please ensure the following:
- You are using Python>=3.8.0 with all required dependencies installed, including PyTorch and ONNX Runtime.
- You followed the proper export process when converting your `yolov5s.pt` model to ONNX format.
- You verified that the ONNX model outputs are as expected before proceeding with COCO evaluation.
Additionally, double-check the preprocessing pipeline to confirm that the inputs to your ONNX model match the same format and normalization as expected by the original PyTorch model.
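For example, a minimal sanity check along these lines (a sketch only, assuming a local test image such as `bus.jpg` from the YOLOv5 repository and a standard 640x640 yolov5s export) can confirm that the raw ONNX outputs look sensible before any COCO evaluation:

```python
import cv2
import numpy as np
import onnxruntime as ort

# Sketch only: adjust the model path and test image to your environment.
sess = ort.InferenceSession("yolov5s.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)          # typically 'images', [1, 3, 640, 640]

img = cv2.imread("bus.jpg")                    # any test image containing obvious objects
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (640, 640)).transpose(2, 0, 1)[None].astype(np.float32) / 255.0

pred = sess.run(None, {inp.name: img})[0]
print("output shape:", pred.shape)             # typically (1, 25200, 85) for a 640x640 export
print("max objectness:", pred[..., 4].max())   # should be well above your confidence threshold
```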
This is an automated response, but an Ultralytics engineer will review the details of your issue and assist you further soon. Thank you for your patience! 😊
@ZCzzzzzz it seems your ONNX Runtime inference is producing zero results, which could be due to several factors. Please check the following:

- Input preprocessing: Ensure the input images are preprocessed correctly to match the input format used during training. Verify the resizing, normalization, and image dimension order (e.g., RGB vs. BGR); see the preprocessing sketch after this list.
- Model outputs: Double-check that the ONNX model outputs match the expected format. You may use tools like Netron to inspect the model structure.
- Non-Max Suppression (NMS): Validate your custom NMS implementation. Consider testing the ONNX model with simpler inputs to confirm predictions before applying NMS.
- Export issues: Ensure the ONNX model was exported correctly using the recommended YOLOv5 export command: `python export.py --weights yolov5s.pt --img 640 --batch 1 --device 0 --include onnx`
- ONNX Runtime setup: Verify that your ONNX Runtime environment (e.g., CUDAExecutionProvider) is set up correctly.
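On the preprocessing point, YOLOv5's own pipeline uses a letterbox resize (aspect-ratio-preserving resize with gray padding) rather than a plain stretch, and predicted boxes must be mapped back to the original image size before COCO evaluation. A minimal sketch of both steps, assuming boxes in xyxy format in the letterboxed 640x640 space, is:

```python
import cv2
import numpy as np

def letterbox(img, new_shape=640, color=(114, 114, 114)):
    # Resize keeping aspect ratio, then pad to a square new_shape x new_shape canvas.
    h, w = img.shape[:2]
    r = min(new_shape / h, new_shape / w)
    new_w, new_h = int(round(w * r)), int(round(h * r))
    dw, dh = (new_shape - new_w) / 2, (new_shape - new_h) / 2
    img = cv2.resize(img, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)
    return img, r, (dw, dh)

def scale_boxes_back(boxes_xyxy, r, pad):
    # Undo the letterbox so boxes are in original-image pixel coordinates,
    # which is what the COCO ground truth annotations use.
    boxes = boxes_xyxy.copy()
    boxes[:, [0, 2]] -= pad[0]
    boxes[:, [1, 3]] -= pad[1]
    boxes[:, :4] /= r
    return boxes
```

Note also that the COCO results format expects each bbox as [x_min, y_min, width, height] in original-image pixels.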
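For the ONNX Runtime setup point, a quick check (sketch only) is to print the available providers and the ones the session actually selected; if CUDA is unavailable, ONNX Runtime silently falls back to CPU:

```python
import onnxruntime as ort

print(ort.get_available_providers())  # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']
sess = ort.InferenceSession("yolov5s.onnx",
                            providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
print(sess.get_providers())           # providers the session is actually using
```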
If the issue persists, try running inference with the PyTorch model to confirm that the problem is isolated to the ONNX workflow. For more details, refer to the YOLOv5 ONNX Export Guide.
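If it helps narrow things down, the repository's val.py can also evaluate an exported ONNX model directly (assuming the standard coco.yaml dataset setup), which gives a known-good COCO evaluation to compare against a custom script:

```bash
# Baseline check: evaluate the exported ONNX model with the built-in validator
python val.py --weights yolov5s.onnx --data coco.yaml --img 640
```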