onnxruntime
[JavaScript] InferenceSession on WebGL
Describe the issue
When I tried to run an InferenceSession on WebGL, I encountered this error.
To reproduce
- Download the YOLOv8n ONNX model here: MODEL
- Run this HTML page from a web server (e.g. Live Server in Visual Studio Code):

```html
<script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.webgl.min.js"></script>
<script type="module">
  // modelInputShape was not defined in the original snippet;
  // [1, 3, 640, 640] is YOLOv8n's default input shape.
  const modelInputShape = [1, 3, 640, 640];
  const model = await ort.InferenceSession.create("yolov8n.onnx", { executionProviders: ["webgl"] });
  const tensor = new ort.Tensor("float32", new Float32Array(modelInputShape.reduce((a, b) => a * b)), modelInputShape);
  await model.run({ images: tensor });
</script>
```
Urgency
Yes, I need to solve this error immediately.
Platform
Windows
OS Version
10
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.17.1
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
WebGL
Model File
No response
Is this a quantized model?
No
Hi there, WebGL will be deprecated in ORT Web soon. Please use WebGPU for GPU inference with ORT Web. Here are the docs: https://onnxruntime.ai/docs/tutorials/web/ep-webgpu.html and an example: https://github.com/microsoft/onnxruntime-inference-examples/tree/main/js/segment-anything
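As a sketch, the reproduction above can be adapted to the WebGPU execution provider roughly like this. Assumptions: a browser with WebGPU enabled, the same `yolov8n.onnx` model served alongside the page, and a `[1, 3, 640, 640]` input shape (YOLOv8n's default); adjust these for your setup.

```html
<!-- Load the WebGPU-enabled ORT Web bundle instead of ort.webgl.min.js -->
<script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.webgpu.min.js"></script>
<script type="module">
  // Assumed input shape for YOLOv8n; change to match your model.
  const modelInputShape = [1, 3, 640, 640];
  const model = await ort.InferenceSession.create("yolov8n.onnx", {
    executionProviders: ["webgpu"],
  });
  const tensor = new ort.Tensor(
    "float32",
    new Float32Array(modelInputShape.reduce((a, b) => a * b)),
    modelInputShape
  );
  const results = await model.run({ images: tensor });
  console.log(Object.keys(results));
</script>
```

Note the only required changes from the WebGL version are the script bundle and the `executionProviders` entry; the Tensor and run calls are unchanged.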
Thank you very much for your response.
I'm having another problem, and I'm struggling to find material to help me solve my task. I'm trying to speed up YOLOv5-segmentation inference using static quantization, and I followed the official ONNX Runtime tutorial on applying static quantization.
However, I encountered an error when I tried to run the preprocessing step.
If you know of other material that could help with my task, I would be very grateful.
You can skip the preprocessing step to unblock yourself. As for the shape-inference failure, does your model contain any non-standard ONNX ops?
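For reference, a minimal sketch of skipping the separate preprocessing step and calling `quantize_static` directly on the original model. The model paths and the `DummyDataReader` are placeholders I made up for illustration; a real calibration reader should yield actual preprocessed images matching the model's input name and shape.

```python
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantType, quantize_static


class DummyDataReader(CalibrationDataReader):
    """Placeholder calibration reader; replace the random data with real preprocessed images."""

    def __init__(self, input_name, shape, num_samples=8):
        self._data = iter(
            [{input_name: np.random.rand(*shape).astype(np.float32)} for _ in range(num_samples)]
        )

    def get_next(self):
        # Return None when calibration data is exhausted.
        return next(self._data, None)


# Paths and input name are examples only; point these at your own model.
quantize_static(
    "yolov5s-seg.onnx",        # input model (preprocessing skipped)
    "yolov5s-seg.quant.onnx",  # quantized output
    DummyDataReader("images", [1, 3, 640, 640]),
    weight_type=QuantType.QInt8,
)
```

Calibrating on random data will produce poor quantization ranges; it is only useful for checking that the model quantizes without errors.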
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.