
[JavaScript] InferenceSession on WebGL

Open shimaamorsy opened this issue 10 months ago • 5 comments

Describe the issue

When I tried to create an InferenceSession with the WebGL execution provider, I encountered this error:

[screenshot: WebGL error]

To reproduce

  1. Download the YOLOv8n ONNX model here: MODEL
  2. Run this HTML page from a web server (e.g. LiveServer in Visual Studio Code):

```html
<script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.webgl.min.js"></script>
```

```js
// modelInputShape is assumed to be defined, e.g. [1, 3, 640, 640] for YOLOv8n
let model = await ort.InferenceSession.create("yolov8n.onnx", { executionProviders: ['webgl'] });
const tensor = new ort.Tensor("float32", new Float32Array(modelInputShape.reduce((a, b) => a * b)), modelInputShape);
await model.run({ images: tensor });
```

Urgency

Yes, I need to solve this error as soon as possible.

Platform

Windows

OS Version

10

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.17.1

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU

Execution Provider Library Version

WebGL

Model File

No response

Is this a quantized model?

No

shimaamorsy avatar Apr 07 '24 13:04 shimaamorsy

Hi there, WebGL will be deprecated in ORT Web soon. Please use WebGPU for GPU inference with ORT Web. Here are the docs: https://onnxruntime.ai/docs/tutorials/web/ep-webgpu.html and an example: https://github.com/microsoft/onnxruntime-inference-examples/tree/main/js/segment-anything
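Following the suggestion above, here is a minimal sketch of switching the repro to the WebGPU EP. It assumes the same `yolov8n.onnx` model, an input named `images`, and the standard `[1, 3, 640, 640]` YOLOv8 input shape — adjust these for your export:

```javascript
// Sketch (untested): YOLOv8n inference with the WebGPU EP in onnxruntime-web.
// Assumes the WebGPU bundle is loaded in the page via:
//   <script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.webgpu.min.js"></script>

// YOLOv8n's default input shape; adjust if your export differs.
const modelInputShape = [1, 3, 640, 640];

// Total number of elements in a tensor of the given shape.
function elementCount(shape) {
  return shape.reduce((a, b) => a * b, 1);
}

async function runWithWebGPU() {
  // 'webgpu' replaces the deprecated 'webgl' execution provider.
  const session = await ort.InferenceSession.create("yolov8n.onnx", {
    executionProviders: ["webgpu"],
  });
  const tensor = new ort.Tensor(
    "float32",
    new Float32Array(elementCount(modelInputShape)),
    modelInputShape
  );
  return session.run({ images: tensor });
}
```

Note that WebGPU requires a browser with WebGPU enabled (e.g. recent Chrome/Edge); otherwise session creation will fail.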

EmmaNingMS avatar Apr 08 '24 17:04 EmmaNingMS

Thank you very much for your response.

I'm having another problem, and I'm struggling to find material to help me solve my task. I'm trying to speed up YOLOv5-segmentation inference using static quantization, and I have followed the official ONNX Runtime tutorial on how to apply static quantization.

However, I encountered an error when I tried to run the quantization library's preprocessing step on the model. If you know of any other material that could help me with my task, I would be very grateful.

shimaamorsy avatar Apr 08 '24 18:04 shimaamorsy

> Thank you very much for your response.
>
> I'm having another problem, and I'm struggling to find material to help me solve my task. I'm trying to speed up YOLOv5-segmentation inference using static quantization, and I have followed the official ONNX Runtime tutorial on how to apply static quantization.
>
> However, I encountered an error when I tried to run the quantization library's preprocessing step on the model.

You can skip the preprocessing step to unblock yourself. As for the shape-inference failure, does your model have non-standard ONNX ops?
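For reference, a minimal sketch of what calling `quantize_static` directly (skipping the preprocessing step) might look like. The random-data calibration reader, the file paths, and the input name `images` are illustrative assumptions — real calibration should feed representative images, not random noise:

```python
# Sketch (untested end-to-end): static quantization of yolov5n-seg.onnx with
# ONNX Runtime, skipping the optional model-preprocessing step.
import numpy as np


class DummyCalibrationReader:
    """Duck-typed calibration data reader: yields a few random input batches.

    Real calibration should use representative images instead of random data.
    """

    def __init__(self, input_name="images", shape=(1, 3, 640, 640), batches=8):
        self.input_name = input_name
        self.shape = shape
        self.remaining = batches

    def get_next(self):
        # Return None when calibration data is exhausted, per the
        # CalibrationDataReader protocol.
        if self.remaining == 0:
            return None
        self.remaining -= 1
        return {self.input_name: np.random.rand(*self.shape).astype(np.float32)}


def quantize(model_in="yolov5n-seg.onnx", model_out="yolov5n-seg-int8.onnx"):
    # Imported here so the sketch can be read without onnxruntime installed.
    from onnxruntime.quantization import QuantType, quantize_static

    quantize_static(
        model_in,
        model_out,
        DummyCalibrationReader(),
        weight_type=QuantType.QInt8,
    )
```

If `quantize_static` still fails on shape inference, yufenglee's question above is the next thing to check: custom or non-standard ops in the model can break symbolic shape inference.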

yufenglee avatar Apr 11 '24 00:04 yufenglee

Thank you for replying.

No, it doesn't.

These are the models I am trying to quantize:

  - yolov5n-seg.onnx
  - yolov8n-seg.onnx

shimaamorsy avatar Apr 13 '24 20:04 shimaamorsy

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

github-actions[bot] avatar May 14 '24 15:05 github-actions[bot]