
YOLOV8Detector with non_max_suppression is not converted

ksv87 opened this issue on Jan 25, 2024 · 3 comments

Describe the bug
I trained a YOLOV8Detector model in keras_cv, exported it to SavedModel format, and then converted it to ONNX. If I convert the model without decode_predictions, the conversion works, but it is unclear how to interpret the two raw outputs: boxes (1, 8400, 64) and classes (1, 8400, 3). If I convert the model with decode_predictions, the outputs are easy to interpret — boxes (1, 100, 4), confidence (1, 100), classes (1, 100), num_detections (1,) — but I get an error during conversion and another error when loading the converted model.

Urgency: one week

System information

  • OS Platform and Distribution: Linux Ubuntu 22.04
  • TensorFlow Version: 2.15
  • Python version: 3.11.5
  • ONNX version: 1.14.0
  • ONNXRuntime version: 1.16.3
  • tf2onnx version: 1.14.0

To Reproduce
Export the keras_cv model to a SavedModel:

import tensorflow as tf
import keras_cv

model = keras_cv.models.YOLOV8Detector.from_preset(
    "yolo_v8_s_backbone_coco",
    num_classes=3,
    bounding_box_format="xyxy",
    fpn_depth=2,
    load_weights=True,
)

model.load_weights("./models/yolov8_with_yolo_v8_s_backbone.keras")

def export_model(model):
    # Wrap the forward pass and the prediction decoding in a single serving
    # function, so the SavedModel returns decoded detections.
    @tf.function(input_signature=[tf.TensorSpec([None, None, None, 3], tf.float32)])
    def serving_fn(image):
        pred = model(image)
        label = model.decode_predictions(pred, image)
        return {
            "boxes": label["boxes"],
            "confidence": label["confidence"],
            "classes": label["classes"],
            "num_detections": label["num_detections"],
        }

    return serving_fn

tf.saved_model.save(
    model,
    export_dir="test_saved_model",
    signatures={"serving_default": export_model(model)},
)
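As a quick sanity check, the exported signature can be inspected before conversion with saved_model_cli, which ships with TensorFlow:

saved_model_cli show --dir test_saved_model --tag_set serve --signature_def serving_default

This should list the image input and the four decoded outputs declared in serving_fn.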

Convert the SavedModel to ONNX:

python -m tf2onnx.convert --saved-model test_saved_model --output yolov8_with_yolo_v8_s_backbone.onnx --opset 18

Conversion errors:

2024-01-25 15:19:14,749 - ERROR - Tensorflow op [StatefulPartitionedCall/non_max_suppression/BroadcastArgs_1: BroadcastArgs] is not supported
2024-01-25 15:19:14,751 - ERROR - Tensorflow op [StatefulPartitionedCall/non_max_suppression/BroadcastArgs_2: BroadcastArgs] is not supported
2024-01-25 15:19:14,754 - ERROR - Tensorflow op [StatefulPartitionedCall/non_max_suppression/BroadcastArgs: BroadcastArgs] is not supported
2024-01-25 15:19:14,799 - ERROR - Unsupported ops: Counter({'BroadcastArgs': 3})

Load the ONNX model in ONNX Runtime:

import onnxruntime as rt

m = rt.InferenceSession("yolov8_with_yolo_v8_s_backbone.onnx", providers=["CPUExecutionProvider"])

Error:

onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from yolov8_with_yolo_v8_s_backbone.onnx failed:This is an invalid model. In Node, ("StatefulPartitionedCall/non_max_suppression/BroadcastArgs_1", BroadcastArgs, "", -1) : ("StatefulPartitionedCall/non_max_suppression/TensorScatterUpdate_4:0": tensor(int32),"StatefulPartitionedCall/non_max_suppression/TensorScatterUpdate_5:0": tensor(int32),) -> ("StatefulPartitionedCall/non_max_suppression/BroadcastArgs_1:0",) , Error No Op registered for BroadcastArgs with domain_version of 18

Additional context
If there is a way to export the model without decode_predictions, post-process the raw outputs (boxes (1, 8400, 64) and classes (1, 8400, 3)) outside the graph, and apply non-max suppression on the ONNX side, then this could be considered not a problem.
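For anyone taking that route, here is a minimal class-agnostic NMS sketch in plain numpy that could run on the ONNX side. It assumes the raw 64-channel regression output has already been decoded to xyxy boxes (e.g. with the wrapper shown in the next comment); all names are illustrative:

import numpy as np

def nms_xyxy(boxes, scores, iou_threshold=0.5, top_k=100):
    """Greedy class-agnostic NMS. boxes: (N, 4) in xyxy, scores: (N,)."""
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0 and len(keep) < top_k:
        i = order[0]
        keep.append(i)
        # IoU of the best remaining box against all others
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = order[1:][iou <= iou_threshold]  # drop overlapping boxes
    return np.array(keep)

For class-aware behavior, run it once per class (or offset each class's boxes by a large class-dependent constant before a single pass).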

— ksv87, Jan 25, 2024

Hi, I was facing the same problem and solved it by wrapping the prediction decoding into the model outputs and then converting the wrapped model.

from pathlib import Path

import tensorflow as tf
import tf2onnx
from keras_cv.src.models.object_detection.yolo_v8.yolo_v8_detector import (
    YOLOV8Detector,
    decode_regression_to_boxes,
    dist2bbox,
    get_anchors,
)

def convert_to_onnx(model: YOLOV8Detector, onnx_opset: int, results_dir: Path):
    wrapper_input = tf.keras.Input(shape=model.input_shape[1:], name="images")
    wrapper_outputs = model(wrapper_input)
    # Decode the raw (1, 8400, 64) regression output to xyxy boxes inside the
    # graph; NMS is left to the consumer, which avoids the unsupported
    # BroadcastArgs ops introduced by decode_predictions.
    anchor_points, stride_tensor = get_anchors(image_shape=model.input_shape[1:3])
    stride_tensor = tf.expand_dims(stride_tensor, axis=-1)
    boxes = dist2bbox(decode_regression_to_boxes(wrapper_outputs["boxes"]), anchor_points) * stride_tensor
    wrapper_outputs["boxes"] = boxes
    wrapper = tf.keras.Model(wrapper_input, wrapper_outputs, name="YOLOv8")
    tf2onnx.convert.from_keras(wrapper, output_path=str(results_dir / "model.onnx"), opset=onnx_opset)
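For completeness, a hedged usage sketch of the exported wrapper. The input name is read from the session rather than guessed (tf2onnx can append a :0 suffix), and the 640×640 dummy shape is an assumption:

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name  # avoids hardcoding the exported name
dummy = np.zeros((1, 640, 640, 3), dtype=np.float32)  # assumed input size
outputs = sess.run(None, {input_name: dummy})  # decoded boxes + class scores; apply NMS afterwards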

— Mauro-Antonello, Feb 7, 2024

@ksv87 ,

I think there are two issues you are facing when converting the model to ONNX:

  1. The outputs without decode_predictions are not what you expect. @Mauro-Antonello has shown a popular approach to handle such a case: wrap the model so that its outputs are the decoded intermediate results.
  2. Your model contains the unsupported op BroadcastArgs, so the final ONNX model cannot be run by ORT. Supporting it needs some work in tf2onnx and is not planned yet, but your contributions on this are definitely welcome! (See the sketch of the op's semantics below.)
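For context on what such a converter would have to emit: BroadcastArgs takes two shape vectors and returns their broadcast shape, matching numpy's broadcast_shapes. A minimal illustration of the semantics (the example shapes are made up):

import numpy as np
import tensorflow as tf

s0, s1 = [1, 8400, 4], [8400, 1]
# The TF op that tf2onnx currently cannot map to ONNX:
print(tf.raw_ops.BroadcastArgs(s0=s0, s1=s1).numpy())  # [   1 8400    4]
# Equivalent numpy semantics a handler would need to reproduce with ONNX ops:
print(np.broadcast_shapes(tuple(s0), tuple(s1)))       # (1, 8400, 4)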

— fatcat-z, Mar 11, 2024