
How to display the keypoint coordinates of YOLOv8 detection targets

Open yueguang01 opened this issue 1 year ago • 3 comments

Search before asking

Question

How to display the keypoint coordinates of YOLOv8 detection targets

Additional

No response

yueguang01 avatar Mar 29 '24 03:03 yueguang01

👋 Hello @yueguang01, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more:

  • Quickstart. Start training and deploying YOLO models with HUB in seconds.
  • Datasets: Preparing and Uploading. Learn how to prepare and upload your datasets to HUB in YOLO format.
  • Projects: Creating and Managing. Group your models into projects for improved organization.
  • Models: Training and Exporting. Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
  • Integrations. Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
  • Ultralytics HUB App. Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
    • iOS. Learn about YOLO CoreML models accelerated on Apple's Neural Engine on iPhones and iPads.
    • Android. Explore TFLite acceleration on mobile devices.
  • Inference API. Understand how to use the Inference API for running your trained models in the cloud to generate predictions.

If this is a 🐛 Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix.

If this is a ❓ Question, please provide as much information as possible, including dataset, model, environment details etc. so that we might provide the most helpful response.

We try to respond to all issues as promptly as possible. Thank you for your patience!

github-actions[bot] avatar Mar 29 '24 03:03 github-actions[bot]

@yueguang01 hi there! 🎉

To display keypoint coordinates of a detection target with YOLOv8, run your image or video through a pose model and read the keypoints directly from the returned results. Each results object contains the detected classes, bounding boxes, and, when a pose model is used, a keypoints attribute in which every detection has an array of (x, y) coordinates, one pair per keypoint.
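Here is a minimal sketch using the Ultralytics Python package. The checkpoint name `yolov8n-pose.pt` and the image path are placeholders; substitute your own trained pose model and data:

```python
from ultralytics import YOLO

# Load a pose (keypoint) model; replace with your own trained checkpoint
model = YOLO("yolov8n-pose.pt")

# Run inference on an image (placeholder path)
results = model("path/to/image.jpg")

for result in results:
    # result.keypoints.xy holds pixel coordinates, shape (num_detections, num_keypoints, 2)
    for det_idx, kpts in enumerate(result.keypoints.xy):
        for kpt_idx, (x, y) in enumerate(kpts.tolist()):
            print(f"Detection {det_idx}, keypoint {kpt_idx}: x={x:.1f}, y={y:.1f}")
```

You can also use `result.keypoints.xyn` for coordinates normalized to the image size, or `result.keypoints.conf` for per-keypoint confidence scores.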

Please refer to the "Handling Results" section in the Ultralytics HUB Docs for a detailed explanation of accessing and manipulating these detection results. Keep in mind that keypoints are only populated when you run a pose model; a plain detection model returns bounding boxes without keypoints.

Happy detecting! 😊

UltralyticsAssistant avatar Mar 29 '24 08:03 UltralyticsAssistant

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

  • Docs: https://docs.ultralytics.com
  • HUB: https://hub.ultralytics.com
  • Community: https://community.ultralytics.com

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

github-actions[bot] avatar Apr 29 '24 00:04 github-actions[bot]