YOLOv8 ONNX Export Fails with Assertion Error
### Search before asking

- [x] I have searched the Ultralytics YOLO issues and found no similar bug report.

### Ultralytics YOLO Component

No response

### Bug
While attempting to export a YOLOv8s model to ONNX format using the Ultralytics CLI, the process crashes with an assertion error from libstdc++'s `stl_vector.h`. The assertion suggests an out-of-bounds `std::vector` index is being hit during the export process.
Console output:

```
Ultralytics 8.3.108 🚀 Python-3.10.12 torch-2.3.0 CPU (Cortex-A78AE)
YOLOv8s summary (fused): 72 layers, 11,156,544 parameters, 0 gradients, 28.6 GFLOPs

PyTorch: starting from 'yolov8s.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (21.5 MB)

/opt/rh/gcc-toolset-14/root/usr/include/c++/14/bits/stl_vector.h:1130: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](size_type) [with _Tp = unsigned int; _Alloc = std::allocator
```
Environment
user@saartha-desktop:~/Documents/EZY-INSPECTION/InterisePoc/weights/v1$ yolo checks Ultralytics 8.3.108 π Python-3.10.12 torch-2.3.0 CUDA:0 (Orin, 15656MiB) Setup complete β (4 CPUs, 15.3 GB RAM, 153.3/1875.3 GB disk)
OS Linux-5.15.136-tegra-aarch64-with-glibc2.35 Environment Linux Python 3.10.12 Install pip Path /home/user/.local/lib/python3.10/site-packages/ultralytics RAM 15.29 GB Disk 153.3/1875.3 GB CPU Cortex-A78AE CPU count 4 GPU Orin, 15656MiB GPU count 1 CUDA 12.2
numpy β 1.24.4<=2.1.1,>=1.23.0 matplotlib β 3.5.1>=3.3.0 opencv-python β 4.11.0.86>=4.6.0 pillow β 11.2.1>=7.1.2 pyyaml β 5.4.1>=5.3.1 requests β 2.25.1>=2.23.0 scipy β 1.8.0>=1.4.1 torch β 2.3.0>=1.8.0 torch β 2.3.0!=2.4.0,>=1.8.0; sys_platform == "win32" torchvision β 0.18.0a0+6043bc2>=0.9.0 tqdm β 4.67.1>=4.64.0 psutil β 7.0.0 py-cpuinfo β 9.0.0 pandas β 1.3.5>=1.1.4 seaborn β 0.13.2>=0.11.0 ultralytics-thop β 2.0.14>=2.0.0
### Minimal Reproducible Example

```bash
yolo export model=yolov8s.pt format=onnx imgsz=640 verbose=True
```
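For reference, the same export can be triggered from the Python API rather than the CLI, which can make it easier to isolate where the crash occurs. This is a sketch, not part of the bug report: the helper name is made up here, and it assumes `ultralytics` is installed (the import is done lazily so defining the function never fails):

```python
def export_to_onnx(weights="yolov8s.pt", imgsz=640):
    """Export a YOLO checkpoint to ONNX via the ultralytics Python API.

    Equivalent to the CLI command:
        yolo export model=yolov8s.pt format=onnx imgsz=640 verbose=True
    """
    # Imported lazily so this module loads even without ultralytics installed.
    from ultralytics import YOLO  # pip install ultralytics

    model = YOLO(weights)
    # export() returns the path of the exported file on success.
    return model.export(format="onnx", imgsz=imgsz, verbose=True)
```

Calling `export_to_onnx()` should write a `yolov8s.onnx` next to the weights, just as the CLI does, so the same assertion failure would be expected to reproduce here as well.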
### Additional

No response

### Are you willing to submit a PR?

- [ ] Yes I'd like to help by submitting a PR!
👋 Hello @karthikreddy157, thank you for reporting this and for providing detailed environment information! 🚀 This is an automated response to help get your issue addressed as quickly as possible. An Ultralytics engineer will also review and assist you soon.
If this is a 🐛 Bug Report, please ensure you've provided a minimum reproducible example (MRE), which helps us identify and debug the problem efficiently. From your description, it looks like you've included the export command and environment details; thank you! If there are any additional scripts, custom modifications, or sample data needed to reproduce the error, please share those as well.
For reference, you can find many Python and CLI usage examples in our Docs.
Join the Ultralytics community where it suits you best.
### Upgrade

Please ensure you are using the latest `ultralytics` package and all requirements in a Python>=3.8 environment with PyTorch>=1.8:

```bash
pip install -U ultralytics
```

### Environments

YOLO can be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

- Notebooks with free GPU
- Google Cloud Deep Learning VM. See GCP Quickstart Guide
- Amazon Deep Learning AMI. See AWS Quickstart Guide
- Docker Image. See Docker Quickstart Guide
### Status
If this badge is green, all Ultralytics CI tests are passing. CI tests verify correct operation of all YOLO Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
Thank you for your patience and your contribution to improving Ultralytics! 🛠️
You could also open an issue on the NVIDIA forums, since this seems platform-specific.
I think I encountered a somewhat similar issue:
```
❯ ./bin/yolo export model=yolo11n.pt format=onnx verbose=true
Ultralytics 8.3.128 🚀 Python-3.12.3 torch-2.7.0+cu126 CPU (AMD Ryzen 7 4800H with Radeon Graphics)
YOLO11n summary (fused): 100 layers, 2,616,248 parameters, 0 gradients, 6.5 GFLOPs

PyTorch: starting from 'yolo11n.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (5.4 MB)
free(): double free detected in tcache 2
Aborted (core dumped)
```
This is what `yolo checks` returns for me:
```
❯ ./bin/yolo checks
Ultralytics 8.3.128 🚀 Python-3.12.3 torch-2.7.0+cu126 CUDA:0 (NVIDIA GeForce GTX 1650 Ti, 4096MiB)
Setup complete ✅ (16 CPUs, 15.3 GB RAM, 111.6/1006.9 GB disk)

OS                  Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Environment         Linux
Python              3.12.3
Install             pip
Path                /home/pierre/myproject/lib/python3.12/site-packages/ultralytics
RAM                 15.31 GB
Disk                111.6/1006.9 GB
CPU                 AMD Ryzen 7 4800H with Radeon Graphics
CPU count           16
GPU                 NVIDIA GeForce GTX 1650 Ti, 4096MiB
GPU count           1
CUDA                12.6

numpy               ✅ 2.2.4>=1.23.0
matplotlib          ✅ 3.10.1>=3.3.0
opencv-python       ✅ 4.11.0.86>=4.6.0
pillow              ✅ 10.4.0>=7.1.2
pyyaml              ✅ 6.0.2>=5.3.1
requests            ✅ 2.32.3>=2.23.0
scipy               ✅ 1.15.2>=1.4.1
torch               ✅ 2.7.0>=1.8.0
torch               ✅ 2.7.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision         ✅ 0.22.0>=0.9.0
tqdm                ✅ 4.67.1>=4.64.0
psutil              ✅ 7.0.0
py-cpuinfo          ✅ 9.0.0
pandas              ✅ 2.2.3>=1.1.4
seaborn             ✅ 0.13.2>=0.11.0
ultralytics-thop    ✅ 2.0.14>=2.0.0
```
I sorted it out by installing the ONNX packages and forcing the export to run on the GPU with `device=0`:

```bash
❯ ./bin/pip install onnx onnxslim onnxruntime-gpu
❯ ./bin/yolo export model=yolo11n.pt format=onnx verbose=true device=0
```
Hi @karthikreddy157 and @zedalaye,

It appears you're both experiencing platform-specific ONNX export failures. As @zedalaye discovered, this can often be resolved by:

1. Installing the required ONNX packages:

   ```bash
   pip install onnx onnxslim onnxruntime-gpu
   ```

2. Forcing GPU execution during export:

   ```bash
   yolo export model=yolov8s.pt format=onnx device=0
   ```

If you're on the Jetson platform without a discrete GPU, you might need to try the `simplify=False` flag to avoid the vector assertion error, as the graph-simplification step may be hitting issues on the ARM architecture.

For anyone experiencing similar issues: these export failures are typically caused by missing dependencies or architecture-specific code paths during the export process.
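Since missing ONNX packages are a common cause, one quick sanity check before exporting is to verify that they are importable. This is a stdlib-only sketch (not part of the ultralytics CLI; the helper name and default package list are illustrative):

```python
import importlib.util


def missing_onnx_deps(packages=("onnx", "onnxslim", "onnxruntime")):
    """Return the names of export dependencies that are not importable."""
    # find_spec() returns None when the package cannot be located.
    return [name for name in packages if importlib.util.find_spec(name) is None]


missing = missing_onnx_deps()
if missing:
    print("Missing export dependencies:", ", ".join(missing))
    print("Install them with: pip install", " ".join(missing))
else:
    print("All ONNX export dependencies are available.")
```

Running this before `yolo export` makes it obvious whether a crash is a dependency problem or a genuine platform-specific bug.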
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
- Docs: https://docs.ultralytics.com
- HUB: https://hub.ultralytics.com
- Community: https://community.ultralytics.com
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐