
deploy/image-demo.py ONNX model inference: AssertionError: No texts found in results.

XEssence opened this issue · 6 comments

I exported an ONNX model with deploy/export_onnx.py:

python deploy/export_onnx.py configs/finetune_coco/yolo_world_v2_m_vlpan_bn_2e-4_80e_8gpus_mask-refine_finetune_coco.py work_dirs/epoch_80.pth --custom-text data/texts/coco_class_texts.json --model-only --opset 12

Then I ran the demo on it:

python deploy/image-demo.py ./test_images/ configs/finetune_coco/yolo_world_v2_m_vlpan_bn_2e-4_80e_8gpus_mask-refine_finetune_coco.py work_dirs/epoch_80.onnx

which fails with the following error:

File "/data/YOLO-World-master/deploy/image-demo.py", line 152, in <module>
    main()
  File "/data/YOLO-World-master/deploy/image-demo.py", line 96, in main
    data, samples = test_pipeline(dict(img=rgb, img_id=i)).values()
  File "/data/anaconda/envs/yolo-world/lib/python3.9/site-packages/mmcv/transforms/base.py", line 12, in __call__
    return self.transform(results)
  File "/data/anaconda/envs/yolo-world/lib/python3.9/site-packages/mmcv/transforms/wrappers.py", line 88, in transform
    results = t(results)  # type: ignore
  File "/data/YOLO-World-master/yolo_world/datasets/transformers/mm_transforms.py", line 114, in __call__
    assert 'texts' in results or hasattr(self, 'class_texts'), (
AssertionError: No texts found in results.

What causes this?

XEssence · Apr 11 '24 11:04

The demo in deploy is not usable at the moment.

wondervictor · Apr 11 '24 13:04

I ran into the same problem. Is there any other way to run this ONNX model?

tomgotjack · Apr 18 '24 16:04

Hi @tomgotjack and @XEssence, I can provide you with simple demo code for now; the official scripts to run the ONNX demo are not ready yet.

1. Import libs
import onnx
import onnxruntime as ort
from PIL import Image, ImageOps
import numpy as np
import supervision as sv
import matplotlib.pyplot as plt

BOUNDING_BOX_ANNOTATOR = sv.BoundingBoxAnnotator()
LABEL_ANNOTATOR = sv.LabelAnnotator()
MASK_ANNOTATOR = sv.MaskAnnotator()
2. Load data
def load_image(image_path):
    image = Image.open(image_path).convert('RGB')
    img_width, img_height = image.size
    # Pad to a centered square on the longer side, then resize to the
    # 640x640 model input.
    size = max(img_width, img_height)
    image = ImageOps.pad(image, (size, size), method=Image.BILINEAR)
    image = image.resize((640, 640), Image.BILINEAR)
    # Normalize to [0, 1] and add a batch dimension (NHWC).
    tensor_image = np.asarray(image).astype(np.float32)
    tensor_image /= 255.0
    tensor_image = np.expand_dims(tensor_image, axis=0)
    return tensor_image, (img_width, img_height, size)
3. Simple visualization
def visualize(results, img):
    # Outputs are ordered (labels, scores, boxes).
    labels = results[0][0]
    scores = results[1][0]
    bboxes = results[2][0]
    # Filter out padded detections (label < 0).
    keep = labels >= 0
    bboxes, scores, labels = bboxes[keep], scores[keep], labels[keep]

    detections = sv.Detections(xyxy=bboxes, class_id=labels, confidence=scores)
    # `texts` holds the per-class names; see the NOTE below.
    label_texts = [
        f"{texts[class_id][0]} {confidence:0.2f}" for class_id, confidence in
        zip(detections.class_id, detections.confidence)
    ]

    # Draw boxes and labels on the de-normalized input image.
    image = (img * 255).astype(np.uint8)
    image = BOUNDING_BOX_ANNOTATOR.annotate(image, detections)
    image = LABEL_ANNOTATOR.annotate(image, detections, labels=label_texts)
    return image
4. Load the ONNX Runtime model
# `onnx_file_name` is the path to the exported ONNX model.
ort_session = ort.InferenceSession(
    onnx_file_name, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
provider_options = ort_session.get_provider_options()
5. Run a sample
img, meta_info = load_image(image_path)
# The network expects NCHW input, so transpose from NHWC.
input_ort = ort.OrtValue.ortvalue_from_numpy(img.transpose((0, 3, 1, 2)))
results = ort_session.run(["labels", "scores", "boxes"], {"images": input_ort})
img_out = visualize(results, img[0])
plt.imshow(img_out)
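
The boxes above are in the 640x640 padded-input coordinate frame; load_image returns meta_info for mapping them back, though the demo never uses it. A minimal sketch of that mapping, assuming the centered padding that ImageOps.pad applies by default (the rescale_boxes helper is an illustration, not part of the demo):

def rescale_boxes(bboxes, meta_info):
    # Map xyxy boxes from the 640x640 input back to original image coordinates.
    img_width, img_height, size = meta_info
    scale = size / 640.0
    pad_x = (size - img_width) / 2.0   # horizontal padding added by ImageOps.pad
    pad_y = (size - img_height) / 2.0  # vertical padding
    bboxes = bboxes * scale
    bboxes[:, [0, 2]] -= pad_x
    bboxes[:, [1, 3]] -= pad_y
    return np.clip(bboxes, 0, [img_width, img_height, img_width, img_height])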

NOTE: You need to initialize `texts` (the per-class name list used by `visualize`) according to your own classes.
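
For instance, with the --custom-text data/texts/coco_class_texts.json file from the original question, a minimal sketch (assuming the JSON is a list of per-class name lists, which is what texts[class_id][0] above expects):

import json

# Load the same class texts that were passed to export_onnx.py.
with open('data/texts/coco_class_texts.json') as f:
    texts = json.load(f)  # e.g. [["person"], ["bicycle"], ...]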

wondervictor · Apr 19 '24 07:04

(quoting @wondervictor's demo code above)

I ran this code and got the following error:

Traceback (most recent call last):
  File "E:\YOLO\YOLO-World\onnxdemo.py", line 61, in <module>
    results = ort_session.run(["labels", "scores", "boxes"], {"images": input_ort})
  File "D:\miniconda3\envs\yolo\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running NonMaxSuppression node. Name:'/NonMaxSuppression' Status Message: non_max_suppression.cc:87 onnxruntime::NonMaxSuppressionBase::PrepareCompute boxes and scores should have same spatial_dimension.

The model I used is the ONNX exported directly from the Hugging Face demo. What went wrong?
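
One way to narrow down such a mismatch is to print the graph's declared input and output shapes and compare the candidate-box dimensions; a minimal sketch, reusing the ort_session from the demo above:

# Print declared I/O shapes; in this NMS error, the boxes and scores tensors
# feeding NonMaxSuppression disagree on the number of candidate boxes.
for inp in ort_session.get_inputs():
    print('input: ', inp.name, inp.shape)
for out in ort_session.get_outputs():
    print('output:', out.name, out.shape)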

tomgotjack · Apr 19 '24 09:04

(quoting @wondervictor's demo code and the NonMaxSuppression error above)

I regenerated the ONNX model locally and used it in place of the one exported directly from the Hugging Face demo; it now runs correctly. The Hugging Face demo itself seems to be having problems today: it errors no matter what I input. So the ONNX I originally exported from Hugging Face was broken from the start.

tomgotjack · Apr 19 '24 10:04

Hi @tomgotjack and @XEssence, the official ONNX demo code has been released; you can find it at deploy/onnx_demo.py.

wondervictor · Apr 28 '24 08:04