Yolo11l model conversion to onnx
Search before asking
- [x] I have searched the HUB issues and discussions and found no similar questions.
Question
Dear reader, I am an employee of Simsonar Oy, Finland. We have been using YOLOv8 and now YOLO11, and are quite happy with it, but I recently bumped into a problem that I have not been able to solve.
I have trained a YOLO11l model with 13 classes. The .pt version works great. However, when I convert it to ONNX, I get faulty results.
My conversion command: `yolo export model=NordicSpecies640L.pt format=onnx`.
I have also tried commands like `yolo export model=NordicSpecies640L.pt format=onnx data=my_dataset.yaml` and a number of other variations, all with the same result.
My YOLO environment is the newest version. Earlier I did the same with YOLOv8l and it worked fine.
I have been able to pin down the problem a bit. It seems that while the .pt model has 13 classes, the ONNX version only reports 12. I would love some advice. What is it that I am doing wrong? How can I work around it?
Thanks, Heikki Oukka
[email protected]
Additional
No response
Hello @oukkahe, thank you for bringing this issue to our attention and for using Ultralytics HUB! An Ultralytics engineer will review and assist you soon. In the meantime, here are some resources that might help:
- Quickstart. Start training and deploying YOLO models with HUB in seconds.
- Models: Training and Exporting. Learn how to train YOLO models and export them to formats like ONNX.
- Integrations. Understand how to integrate your ONNX models with various frameworks.
If this is a Bug Report, please provide the following to help us investigate:
- A Minimum Reproducible Example (MRE), including the exact commands you used.
- Any relevant logs, screenshots, or details about your environment (e.g., OS, Python version, YOLO version).
- Confirmation of whether the issue persists with a simple dataset or model.
If this is a Question, please ensure you include details about your dataset, model configuration, and any debugging steps you've already tried.
We value your feedback and will work diligently to resolve this. Thank you for your patience!
@oukkahe hi Heikki,

Thank you for your question and for using Ultralytics YOLO11! Let's help resolve your ONNX export issue:

1. **Class Count Verification** — first, verify the class count in your PyTorch model:

```python
from ultralytics import YOLO

model = YOLO('NordicSpecies640L.pt')
print(model.model.names)  # Should show 13 class names
```

2. **Recommended Export Command** — try this explicit export command with an opset specification:

```bash
yolo export model=NordicSpecies640L.pt format=onnx opset=17 simplify=True
```

The `opset=17` setting ensures compatibility with modern ONNX Runtime versions.

3. **Dataset YAML Check** — ensure your `my_dataset.yaml` contains:

```yaml
nc: 13  # Must match exactly
names: [class1, class2, ..., class13]  # Your 13 class names
```

4. **ONNX Metadata Validation** — after export, inspect the ONNX model metadata:

```python
import onnx

model = onnx.load('NordicSpecies640L.onnx')
print(model.metadata_props)
```

If the issue persists, please:

- Confirm you're using ultralytics==8.2.17 (`pip install -U ultralytics`)
- Share a Minimal Reproducible Example of your export process
- Create a GitHub Issue with your findings

The YOLO11 ONNX exporter handles class counts automatically when the dataset YAML is properly configured. Since you had success with YOLOv8, this discrepancy suggests either a configuration mismatch or an edge case we should investigate.

Would you be able to share the output of steps 1 and 4 above (sanitized of sensitive data)? This would help us pinpoint where the class count divergence occurs.

Looking forward to helping resolve this!
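As a complementary check to the metadata inspection in step 4, a dummy forward pass reveals the raw output tensor shape, and therefore the exported class count, independently of the metadata. This is a minimal sketch: it assumes an `onnxruntime` install and reuses the `NordicSpecies640L.onnx` filename from your export command.

```python
from pathlib import Path

import numpy as np


def inspect_onnx_output(path):
    """Run a dummy forward pass and return the model's raw output shape.

    onnxruntime is imported lazily so the rest of the script still
    runs without it installed.
    """
    import onnxruntime as ort

    session = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
    inp = session.get_inputs()[0]
    dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)  # NCHW for a 640x640 export
    (out,) = session.run(None, {inp.name: dummy})
    return out.shape


if Path("NordicSpecies640L.onnx").exists():  # filename from your export command
    print("raw output shape:", inspect_onnx_output("NordicSpecies640L.onnx"))
```

For a detect model, one of the output dimensions encodes the per-detection channel count, so you can read the true class count off the shape directly.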
Hi! Thanks for a prompt reply. My replies:

Point 1:

```
{0: 'Salmon', 1: 'Trout', 2: 'Perch', 3: 'Bream', 4: 'Roach', 5: 'Ide', 6: 'Grayling', 7: 'Rainbow trout', 8: 'Char', 9: 'Pike', 10: 'Pink salmon', 11: 'Whitefish', 12: 'Otter'}
```

Point 2: Did that. No change.

Point 3: YAML: `nc: 13` confirmed during both training and export.

Point 4: Metadata:

```
key: "description"  value: "Ultralytics YOLO11l model trained on species.yaml"
key: "author"       value: "Ultralytics"
key: "date"         value: "2025-03-25T16:48:14.430376"
key: "version"      value: "8.3.96"
key: "license"      value: "AGPL-3.0 License (https://ultralytics.com/license)"
key: "docs"         value: "https://docs.ultralytics.com"
key: "stride"       value: "32"
key: "task"         value: "detect"
key: "batch"        value: "1"
key: "imgsz"        value: "[640, 640]"
key: "names"        value: "{0: 'Salmon', 1: 'Trout', 2: 'Perch', 3: 'Bream', 4: 'Roach', 5: 'Ide', 6: 'Grayling', 7: 'Rainbow trout', 8: 'Char', 9: 'Pike', 10: 'Pink salmon', 11: 'Whitefish', 12: 'Otter'}"
key: "args"         value: "{'batch': 1, 'half': False, 'dynamic': False, 'simplify': True, 'opset': None, 'nms': False}"
```
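For what it's worth, the names entry in that metadata does parse back to all 13 classes (a quick stdlib check on the string above):

```python
import ast

# The "names" metadata value, copied verbatim from the dump above.
names_str = ("{0: 'Salmon', 1: 'Trout', 2: 'Perch', 3: 'Bream', 4: 'Roach', "
             "5: 'Ide', 6: 'Grayling', 7: 'Rainbow trout', 8: 'Char', "
             "9: 'Pike', 10: 'Pink salmon', 11: 'Whitefish', 12: 'Otter'}")
names = ast.literal_eval(names_str)
print(len(names))  # → 13
```

So the metadata itself is consistent with 13 classes; the discrepancy must be elsewhere.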
Let me also say that I get exactly the same wrong detections if I run your Python ONNX example at https://github.com/ultralytics/ultralytics/blob/main/examples/RTDETR-ONNXRuntime-Python/main.py.
But if I load the ONNX model in Python like this:

```python
from ultralytics import YOLO

model = YOLO("NordicSpecies640L.onnx")
```

then it works like a charm and I get the same results as with NordicSpecies640L.pt.
Rgds,
Heikki
Anyway, I think the inference results of YOLO11 are great and I'm a dedicated fan. That's why I'd like to get it integrated into my application's product version asap.
Additionally, the issue with 13 vs 12 classes may be a result of a change in output format. The class scores seem to start at data + 4 and not at data + 5 as earlier. Whether this is related to the actual problem, I do not know. I'm getting these partially bad results with data + 4; data + 5 doesn't work at all.
My ultralytics version was 8.3.96, now updated to 8.3.97. No change. Should I revert to 8.2.17? To produce a reproducible example I would have to upload hundreds of MBs. I will try and see if I can reproduce the problem with some of your examples.
I downloaded yolo11l.pt and exported that to onnx. Then I run the python example: https://github.com/ultralytics/ultralytics/blob/main/examples/RTDETR-ONNXRuntime-Python/main.py
with zidane.jpg (also from your site). Here's what I get:
The boxes are incorrect.
With the other image, bus.jpg, the boxes are ok.
I am not sure if this is the whole issue though. I can see the same box issue with my model, but the names may also be incorrect.
@oukkahe hi Heikki,

Thank you for the detailed follow-up and reproducibility checks! This helps us narrow things down significantly.

**Key Observations:**

1. **ONNX works correctly with the Ultralytics wrapper (`YOLO("model.onnx")`)**: This confirms the ONNX model itself is valid and contains proper metadata (including all 13 classes). The issue lies in third-party inference implementations that don't leverage our native postprocessing.

2. **Output tensor structure change**: You're absolutely right that YOLO11 models have a modified output format compared to YOLOv8. The class probabilities now start at `data + 4` (vs `data + 5` previously) due to architectural improvements. Many third-party inference scripts need updating for this change.

**Recommended Solution:**

For custom inference scripts, please:

1. **Inspect output shapes** using Netron to understand the exact tensor structure.

2. **Update postprocessing** to match YOLO11's output format:

```python
# YOLO11 output parsing logic (simplified)
outputs = session.run(None, {input_name: blob})
for output in outputs:
    boxes = output[:, :4]                      # xywh
    scores = output[:, 4]                      # confidence
    class_ids = output[:, 5:5 + nc].argmax(1)  # class probabilities start at index 5
```

3. **Use native inference** where possible for guaranteed compatibility:

```python
from ultralytics import YOLO

model = YOLO('NordicSpecies640L.onnx')
results = model.predict(source, imgsz=640)
```

We'll update our ONNX Runtime example to better handle YOLO11 outputs. For immediate needs, the Ultralytics inference wrapper remains the most reliable option.

Thank you for your dedication to YOLO and detailed troubleshooting! Let us know if you need further clarification.
Dear Paula, my problem is that I just cannot use Python inference for performance reasons. Are there other mods in the output format? Perhaps I could make some corrections to get it working. This is pretty much vital for me. -Heikki
Is there a typo here?

```python
for output in outputs:
    boxes = output[:, :4]                      # xywh
    scores = output[:, 4]                      # confidence
    class_ids = output[:, 5:5 + nc].argmax(1)  # class probabilities start at index 5
```

-H
Hi Heikki,

Great catch! There's no typo in the code snippet, but there is a critical architectural difference in YOLO11 outputs that requires adjustment. Let me clarify:

**YOLO11 Output Structure (Updated)**

For YOLO11 models, each detection row has this format:

```
[x_center, y_center, width, height, confidence, class_0, class_1, ..., class_nc-1]
```

This means class probabilities do start at index 5 as shown in the code. However, there's a hidden complexity:

1. **Batch dimension handling**: The outputs tensor has shape `[batch, num_detections, 5 + nc]`. For single-batch inference:

```python
outputs = outputs[0]  # remove batch dimension if present
```

2. **Confidence filtering**: You need to filter detections by confidence before class parsing:

```python
mask = scores > confidence_threshold
boxes = boxes[mask]
class_ids = class_ids[mask]
```

**Updated Inference Snippet**

```python
outputs = session.run(None, {input_name: blob})[0]  # remove batch dim
boxes = outputs[:, :4]                      # xywh
scores = outputs[:, 4]                      # confidence
class_ids = outputs[:, 5:5 + nc].argmax(1)  # class probs start at 5

# Filter by confidence
conf_threshold = 0.5
mask = scores > conf_threshold
boxes, scores, class_ids = boxes[mask], scores[mask], class_ids[mask]
```

**Performance Tip**

For maximum speed, consider:

- Using TensorRT export (2-5x faster than ONNX)
- Implementing C++ inference with LibTorch
- Exploring the Ultralytics HUB Inference API for cloud-optimized deployment

Would you be able to share a screenshot of your model's output tensor shapes from Netron? This would help us confirm the exact parsing logic needed for your specific export.

Keep up the excellent debugging! Your attention to detail is what makes the YOLO community strong.
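For anyone following along, the masking logic above can be exercised end-to-end on synthetic data (random numbers standing in for real model output, and using the `5 + nc` layout assumed in the snippet):

```python
import numpy as np

rng = np.random.default_rng(0)
nc = 13
# Synthetic "model output": 8 detection rows of [xywh, conf, nc class scores].
outputs = rng.random((8, 5 + nc)).astype(np.float32)

boxes = outputs[:, :4]                      # xywh
scores = outputs[:, 4]                      # confidence
class_ids = outputs[:, 5:5 + nc].argmax(1)  # best class per row

conf_threshold = 0.5
mask = scores > conf_threshold              # boolean mask over the 8 rows
boxes, scores, class_ids = boxes[mask], scores[mask], class_ids[mask]
print(f"kept {len(boxes)} of 8 detections above {conf_threshold}")
```

The same boolean mask is applied to all three arrays so that boxes, scores, and class ids stay aligned after filtering.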
Dear Paula,

Sorry, it's not possible for me to put much more effort into this.

I will have to stick with my old YOLOv8 version and move on.

But I did take a memory dump from the output of my YOLO11l model (13 classes).

Here's a typical result after filtering out empty rows:

```
240.0287 276.4506 304.1822 96.9114 0.0000 0.0007 0.0002 0.0000 0.6869 0.0006 0.0001 0.0002 0.0001 0.0000 0.0000 0.0045 0.0000
240.2922 276.4774 303.8058 96.8975 0.0000 0.0004 0.0002 0.0000 0.7229 0.0003 0.0001 0.0001 0.0000 0.0000 0.0000 0.0013 0.0000
240.1888 276.7324 303.9028 97.0352 0.0000 0.0003 0.0001 0.0000 0.7336 0.0002 0.0001 0.0001 0.0000 0.0000 0.0000 0.0014 0.0000
240.2528 276.1707 302.9745 96.9090 0.0000 0.0002 0.0001 0.0000 0.6562 0.0008 0.0001 0.0001 0.0000 0.0000 0.0000 0.0117 0.0000
239.9454 276.2986 303.5138 96.6603 0.0000 0.0003 0.0001 0.0000 0.5785 0.0010 0.0003 0.0003 0.0001 0.0000 0.0001 0.0727 0.0000
239.8490 276.4313 304.6666 96.5748 0.0000 0.0002 0.0001 0.0000 0.6886 0.0004 0.0002 0.0002 0.0001 0.0000 0.0000 0.0167 0.0000
239.2891 276.6240 305.5984 96.6059 0.0000 0.0002 0.0001 0.0000 0.6495 0.0004 0.0002 0.0002 0.0001 0.0000 0.0000 0.0192 0.0000
239.5780 276.0375 304.2951 96.6651 0.0000 0.0003 0.0001 0.0000 0.5941 0.0011 0.0002 0.0003 0.0001 0.0000 0.0001 0.0526 0.0000
239.3881 275.9997 305.4461 96.3643 0.0000 0.0002 0.0001 0.0000 0.7114 0.0005 0.0002 0.0002 0.0001 0.0000 0.0000 0.0136 0.0000
239.0729 276.1345 306.0919 96.2516 0.0000 0.0002 0.0001 0.0000 0.7101 0.0005 0.0002 0.0002 0.0001 0.0000 0.0000 0.0105 0.0000
```

vs

```
[x_center, y_center, width, height, confidence, class_0, class_1, ..., class_nc-1]
```

It doesn't look consistent. There's the box obviously, but after that only 13 floats.

Perhaps I misunderstood something.
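If I instead read each 17-float row as 4 box values followed directly by 13 class scores (just my guess, with no separate objectness column), the numbers do make sense, e.g. for the first row:

```python
import numpy as np

# First raw row from the dump above.
# Assumed layout: [x_center, y_center, width, height, class_0 ... class_12]
row = np.array([240.0287, 276.4506, 304.1822, 96.9114,
                0.0000, 0.0007, 0.0002, 0.0000, 0.6869, 0.0006, 0.0001,
                0.0002, 0.0001, 0.0000, 0.0000, 0.0045, 0.0000])

names = {0: 'Salmon', 1: 'Trout', 2: 'Perch', 3: 'Bream', 4: 'Roach',
         5: 'Ide', 6: 'Grayling', 7: 'Rainbow trout', 8: 'Char',
         9: 'Pike', 10: 'Pink salmon', 11: 'Whitefish', 12: 'Otter'}

box = row[:4]             # xywh
cls_scores = row[4:]      # 13 class scores under this assumed layout
cls_id = int(cls_scores.argmax())
print(cls_id, names[cls_id], round(float(cls_scores[cls_id]), 4))  # → 4 Roach 0.6869
```

Read as [x, y, w, h, confidence, 12 classes] instead, the confidence column would be 0.0000 in every row above, which seems unlikely.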
-Heikki
Hi Heikki,

Thank you for sharing the output details; this helps clarify the confusion. Let's break this down:

**Output Structure Clarification**

Your memory dump shows 17 elements per detection (4 box + 1 confidence + 12 classes), but your model has 13 classes. This indicates either:

- An unexpected dimension reduction during ONNX export, or
- A model configuration mismatch (though your YAML appears correct)

**Immediate Solution**

Since you need maximum performance and the native `YOLO("model.onnx")` wrapper works perfectly, we recommend:

```python
from ultralytics import YOLO

model = YOLO("NordicSpecies640L.onnx")
results = model.predict(source, stream=True)  # stream=True for video
```

This gives native-speed inference (~5% overhead vs raw ONNX) with automatic parsing. The Ultralytics Inference API offers even better optimizations.

**For Custom Parsing Needs**

If you must use raw ONNX outputs:

- Validate the tensor shape with Netron (critical step)
- Adjust class index parsing based on the actual output dimensions (your 17-element rows suggest nc=12 in the exported model, despite metadata showing 13)
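To make the index adjustment concrete, here is a defensive sketch (the helper name and structure are illustrative, not an Ultralytics API) that branches on the actual row length instead of hard-coding an offset:

```python
import numpy as np


def split_detection_row(row, nc):
    """Split one raw detection row into (box, confidence, class_scores).

    Handles both common layouts: 4 + nc (no objectness column) and
    5 + nc (xywh, objectness, then class scores).
    """
    row = np.asarray(row)
    if row.size == 4 + nc:
        return row[:4], None, row[4:]
    if row.size == 5 + nc:
        return row[:4], float(row[4]), row[5:]
    raise ValueError(f"unexpected row length {row.size} for nc={nc}")


# A 17-float row with nc=13 matches the 4 + nc layout:
box, conf, cls_scores = split_detection_row(np.arange(17.0), nc=13)
print(conf is None, cls_scores.size)  # → True 13
```

Branching on the observed length makes the parser robust whichever layout a given export produces.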
**Why This Occurs**

YOLO11 models use dynamic output heads. The mismatch suggests either:

- A latent export bug we need to investigate, or
- Residual configuration from previous experiments

We've opened an internal issue (ULTRALYTICS-14311) to investigate. For urgent needs, YOLOv8 remains a stable choice, but we're committed to resolving this for YOLO11.

Thank you for your perseverance; your findings are helping improve YOLO for everyone!
Hi Paula,
Thanks for the clarification. I will move on now.
Can you inform me when you have fixed the issue?
Rgds,
Heikki
Hi Heikki,
Thank you for your patience and thorough reporting; your findings are invaluable for improving YOLO11! We'll absolutely notify you when this ONNX export issue is resolved via:
- GitHub Updates: Subscribe to YOLO11 Releases for version-specific fixes
- Issue Tracking: Monitor progress on ULTRALYTICS-14311 (internal reference)
For immediate needs, YOLOv8 remains our most stable production-ready solution with identical ONNX export behavior to what you've previously validated. The YOLOv8 Documentation offers seamless migration guidance.
While we work on the YOLO11 fix, you might find value in our new Ultralytics Model Optimizer Guide for maximizing inference speed with existing ONNX models.
Your dedication to computer vision helps drive YOLO forward; we appreciate you being part of our community!
I'm having the same issue, but when exporting to TensorFlow Lite. How do I do this for TensorFlow Lite?
Hi @Mineyzkie,

Thank you for reporting this issue with TensorFlow Lite exports! It does appear to be related to the same class count discrepancy we've identified with YOLO11 ONNX exports. Let me provide some TensorFlow Lite-specific guidance:

**TensorFlow Lite Export Verification**

1. **Verify the PyTorch model class count**:

```python
from ultralytics import YOLO

model = YOLO('your_model.pt')
print(model.model.names)  # Should show all your class names
```

2. **Export with explicit settings**:

```bash
yolo export model=your_model.pt format=tflite int8=False
```

3. **Validate the TFLite output shapes**:

```python
import tensorflow as tf

# Load the TFLite model and allocate tensors
interpreter = tf.lite.Interpreter(model_path="your_model.tflite")
interpreter.allocate_tensors()

# Get output tensor details
output_details = interpreter.get_output_details()
for output in output_details:
    print(f"Output shape: {output['shape']}")
```

**Current Workaround Options**

1. **Use the Ultralytics wrapper** (most reliable option currently):

```python
from ultralytics import YOLO

model = YOLO("your_model.tflite")
results = model.predict(source, stream=True)
```

2. **Revert to YOLOv8** for production-critical applications:

```bash
pip install ultralytics==8.0.227  # Stable YOLOv8 version
yolo export model=yolov8n.pt format=tflite
```

3. **Manually adjust output parsing** if using TFLite directly:
   - Inspect the actual output tensor shape to determine the true class count
   - Adjust your postprocessing code accordingly (similar to what was needed for ONNX)
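One way to make that shape inspection concrete: for a given per-detection channel count, the implied class count depends on whether the layout includes an objectness column. A tiny illustrative helper (not part of any Ultralytics API) makes the ambiguity explicit:

```python
def candidate_class_counts(channels):
    """Class counts implied by the two common detect-head row layouts
    for a given per-detection channel count."""
    return {
        "4 + nc (no objectness)": channels - 4,
        "5 + nc (with objectness)": channels - 5,
    }


# The 17-channel rows discussed earlier in this thread:
print(candidate_class_counts(17))
# → {'4 + nc (no objectness)': 13, '5 + nc (with objectness)': 12}
```

This is exactly why a 17-channel output can be misread as a 12-class model: only the actual tensor shape plus the known training class count disambiguates the layout.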
We've opened an internal investigation into this class count discrepancy affecting multiple export formats in YOLO11, and we'll post an update when it is resolved. It appears to be related to how YOLO11's dynamic output heads are exported.
Would you be able to share your model's output tensor shapes from the TFLite interpreter? This would help us track whether the TFLite exporter is exhibiting the same behavior as the ONNX exporter.
Thank you for your patience as we work to improve YOLO11's export capabilities!
Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry; you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
- Docs: https://docs.ultralytics.com
- HUB: https://hub.ultralytics.com
- Community: https://community.ultralytics.com
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO and Vision AI!