
Results 8 comments of Shimaa Morsey

Thank you very much for your response. I'm having another problem, and I'm struggling to find material to help me solve my task. I'm trying to speed up...

Thank you for replying. No, these are the models I am trying to quantize: [yolov5n-seg.onnx](https://drive.google.com/file/d/1eIYvg6Q2BeHgjVZFNVwlSMzhLW4dzu8F/view?usp=sharing) [yolov8n-seg.onnx](https://drive.google.com/file/d/1HKX19wLddKryt3BKc4VUCpDCmAboSy6b/view?usp=sharing)

Firstly, I'd like to express my gratitude for your response. Initially, I exported [yolov8n-seg.pt to the ONNX format](https://docs.ultralytics.com/tasks/segment/#export):

```python
from ultralytics import YOLO

model = YOLO('yolov8n-seg.pt')
model.export(format='onnx')
```

Following that, I attempted to...

Thank you for replying. I updated onnxruntime, but unfortunately the same error still appears. These are the models I am trying to quantize: [yolov5n-seg.onnx](https://drive.google.com/file/d/1eIYvg6Q2BeHgjVZFNVwlSMzhLW4dzu8F/view?usp=sharing) [yolov8n-seg.onnx](https://drive.google.com/file/d/1HKX19wLddKryt3BKc4VUCpDCmAboSy6b/view?usp=sharing)

When I tried, I encountered this error:

```javascript
let model = await ort.InferenceSession.create("yolov8n.onnx", { backendHint: 'webgl' });
const tensor = new ort.Tensor("float32", new Float32Array(modelInputShape.reduce((a, b) => a * b)), modelInputShape);
await model.run({ images:...
```

And I have another question: what should I do if I want to apply quantization to "yolov8n.onnx"?

Hi @xadupre, I exported my model using the following code: `model.export(format='onnx', dynamic=True, simplify=True, opset=12)` However, the exported ONNX model currently runs on the GPU only when the batch size is...

Hello @xadupre, I hope you're doing well. Could you please assist me with resolving my issue? I've been facing challenges running YOLOv8-seg.onnx with dynamic batch sizes on GPU using ONNX...