IMX export pose estimation support
🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
🌟 Summary
This update adds support for exporting pose estimation models to the IMX format, alongside existing detection model support, improving compatibility and flexibility for users working with IMX hardware. 🕺🤖
📊 Key Changes
- IMX export now supports pose estimation models in addition to the existing detection models.
- Added specific handling and configuration for pose models during IMX export, including layer names, memory usage, and layer counts.
- Enhanced the Non-Maximum Suppression (NMS) logic to process pose outputs correctly, extracting both bounding boxes and keypoints.
- Updated model output formatting for pose tasks to ensure correct export and inference on IMX devices.
- Adjusted the forward pass in the model head to handle IMX-specific pose output formatting.
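The keypoint-extraction step described above can be sketched roughly as follows. This is a minimal, hypothetical illustration using NumPy and a made-up post-NMS tensor layout (17 COCO keypoints), not the actual Ultralytics implementation:

```python
import numpy as np

# Hypothetical post-NMS pose output: one row per detection, laid out as
# [x1, y1, x2, y2, score, class, kx1, ky1, kv1, ..., kx17, ky17, kv17]
# (17 COCO keypoints x 3 values). Shapes and layout are illustrative only.
NUM_KPTS = 17

def split_pose_output(dets: np.ndarray):
    """Split a combined (N, 6 + 3*NUM_KPTS) detection array into
    boxes, scores, classes, and keypoints."""
    boxes = dets[:, :4]                          # (N, 4) xyxy boxes
    scores = dets[:, 4]                          # (N,) confidence scores
    classes = dets[:, 5].astype(int)             # (N,) class indices
    kpts = dets[:, 6:].reshape(-1, NUM_KPTS, 3)  # (N, 17, 3) x, y, visibility
    return boxes, scores, classes, kpts

# Tiny smoke test with two fake detections
dets = np.zeros((2, 6 + 3 * NUM_KPTS), dtype=np.float32)
dets[0, :6] = [10, 20, 50, 80, 0.9, 0]
boxes, scores, classes, kpts = split_pose_output(dets)
print(boxes.shape, kpts.shape)  # (2, 4) (2, 17, 3)
```

The key point is that pose outputs carry keypoints alongside boxes, so the NMS path has to slice both out of the combined tensor rather than assuming a detection-only layout.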
🎯 Purpose & Impact
- Broader Hardware Support: Users can now export pose estimation models to IMX, enabling deployment on a wider range of devices. 💪
- Improved Usability: Streamlines the workflow for those working with both detection and pose tasks, reducing manual intervention.
- Enhanced Performance: Ensures pose models are optimized and run efficiently on IMX hardware, with correct output handling for keypoints.
- Future-Proofing: Lays the groundwork for supporting more tasks and models in the IMX export pipeline.
Overall, this update makes it easier for users to deploy both detection and pose models on IMX devices, expanding the versatility of Ultralytics solutions. 🚀
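With this change, exporting a pose model should follow the same pattern as detection export. The sketch below only assembles the call and its CLI equivalent rather than running the export, since the actual export requires the ultralytics package and Sony's IMX toolchain; the `int8` argument and model name are illustrative assumptions:

```python
# Hedged sketch of exporting a pose model to IMX with the Ultralytics API.
# We build the arguments and the equivalent CLI string without executing
# the export itself.
export_kwargs = {"format": "imx", "int8": True}  # int8 shown for illustration

# With ultralytics installed, the real call would look like:
#   from ultralytics import YOLO
#   YOLO("yolo11n-pose.pt").export(**export_kwargs)

cli = "yolo export model=yolo11n-pose.pt " + " ".join(
    f"{k}={v}" for k, v in export_kwargs.items()
)
print(cli)
```

Either the Python API or the `yolo export` CLI form should produce an IMX-compatible artifact for pose models after this PR.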
👋 Hello @ambitious-octopus, thank you for submitting an ultralytics/ultralytics 🚀 PR! This is an automated response to help streamline the review process. An Ultralytics engineer will also review and assist you soon.
To ensure a smooth integration of your contribution, please review this checklist:
- ✅ Define a Purpose: Clearly explain the purpose of your fix or feature in your PR description, and link to any relevant issues. Ensure your commit messages are clear, concise, and follow the project's conventions.
- ✅ Synchronize with Source: Confirm your PR is synchronized with the `ultralytics/ultralytics` `main` branch. If it's behind, update it by clicking the 'Update branch' button or by running `git pull` and `git merge main` locally.
- ✅ Ensure CI Checks Pass: Verify all Ultralytics Continuous Integration (CI) checks are passing. If any checks fail, please address them.
- ✅ Update Documentation: Update the relevant documentation for any new or modified features.
- ✅ Add Tests: If applicable, include or update tests to cover your changes, and confirm that all tests are passing.
- ✅ Sign the CLA: Please ensure you have signed our Contributor License Agreement if this is your first Ultralytics PR by writing "I have read the CLA Document and I sign the CLA" in a new message.
- ✅ Minimize Changes: Limit your changes to the minimum necessary for your bug fix or feature addition. "It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is." — Bruce Lee
For more guidance, please refer to our Contributing Guide. If you have any questions, feel free to leave a comment. Thank you for helping improve Ultralytics! 🌟🤖
Codecov Report
Attention: Patch coverage is 7.50000% with 37 lines in your changes missing coverage. Please review.
@ambitious-octopus @lakshanthad ok guys, I've further simplified the code a bit and tested both detection and pose validation on coco8/coco8-pose.yaml, and both work properly for me.
@ambitious-octopus @lakshanthad ok, as discussed I've reverted the simplification changes I made here: https://github.com/ultralytics/ultralytics/pull/20196/commits/876f1ba8a7d446a6c151f553080a1e837e2cf52d. Going to merge this one, and let's revisit this part in the future if needed.
🎉 Fantastic work, team! This PR marks a major milestone—bringing seamless pose estimation to Sony IMX500 devices and making Ultralytics models even more versatile at the edge. As Henry Ford said, “Coming together is a beginning; keeping together is progress; working together is success.” Your collaboration—@ambitious-octopus, @glenn-jocher, @Laughing-q, @lakshanthad, and @itai-berman—truly exemplifies this spirit. The expanded documentation, improved workflows, and broader model support will empower our community and open new doors for edge AI innovation. Thank you for your dedication and expertise! 🚀