Changing the conf parameter in Inference API code has no effect on results
Search before asking
- [X] I have searched the HUB issues and found no similar bug report.
HUB Component
Inference
Bug
I am opening this GitHub issue (bug) at the suggestion of pderrenger in issue (question) #893, where this problem was first raised about 4 days ago. In short: classification model inference results are identical for both "conf": 0.25 and "conf": 0.90. The model used in this test is YOLO11n Classify (cm_v11n_100epoch-640imgsz_20241027LatlPhoto), running on Ultralytics HUB.
I am attaching two screenshots, "Python code w/ conf=0.25.png" and "Python code w/ conf=0.90.png". Both show the same results being returned, with the response in both cases reporting confidence = 0.61369.
Environment
- Computer: MacBook Pro (2023)
- OS: macOS 15.0.1
- Browser: Firefox 131.0.3
Minimal Reproducible Example
The example is shown in the screenshots above; a text sketch of the request follows below.
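Since the screenshots cannot be reproduced here as text, below is a sketch of the kind of request involved, based on the example request format in the HUB docs. The endpoint, field names, and the API_KEY/MODEL_ID placeholders are assumptions, not values copied from the screenshots.

```python
import requests

# Shared Inference API endpoint as shown in the HUB docs (verify against your Model Deploy tab)
url = "https://predict.ultralytics.com"

# Placeholders: substitute your actual HUB API key and model URL
headers = {"x-api-key": "API_KEY"}
data = {
    "model": "https://hub.ultralytics.com/models/MODEL_ID",
    "imgsz": 640,
    "conf": 0.25,  # changing this to 0.90 returned an identical response
    "iou": 0.45,
}

# Send one image for inference
with open("path/to/image.jpg", "rb") as f:
    response = requests.post(url, headers=headers, data=data, files={"file": f})

response.raise_for_status()
print(response.json())  # confidence = 0.61369 with both conf values
```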
Additional
No response
👋 Hello @curtinmjc, thank you for bringing this issue to our attention regarding the Ultralytics HUB Inference API 🚀! We're here to help ensure everything runs smoothly. Our HUB documentation offers comprehensive guides and insights, which you might find useful:
- Quickstart: Get started quickly with training and deploying YOLO models.
- Inference API: Delve into the specifics of using the Inference API for running your trained models and generating predictions.
Based on your description, it sounds like you've encountered a potential 🐛 bug. To assist our engineering team further, could you please ensure that your bug report includes a minimal reproducible example? This helps us replicate the issue on our side.
For debugging purposes, confirming the following would be beneficial:
- Double-check the API request structure and ensure it adheres to the expected format.
- Provide any relevant logs or additional error messages.
Rest assured, an Ultralytics engineer will review your issue soon to assist you further. Thank you for your patience and for helping us improve the Ultralytics HUB! 😊
@curtinmjc hello!
Thank you for bringing this to our attention. It seems like you're experiencing an issue where changing the conf parameter in the Inference API doesn't affect the results as expected. This could be due to a few reasons, and I'd be happy to help troubleshoot this with you.
First, please ensure that you are using the latest version of the Ultralytics HUB and related packages, as updates may have addressed this issue. If the problem persists, it might be related to how the confidence threshold is applied in the classification model. Unlike detection models, classification models typically output a single prediction with the highest confidence, which might not be filtered by the confidence threshold in the same way.
To further investigate, you might want to try using a detection model to see if the conf parameter behaves as expected. This could help determine if the issue is specific to classification models.
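If you want to verify this locally rather than through the HUB API, a minimal sketch with the ultralytics package would look like the following; it uses the standard pretrained weights and an assumed local image, not your custom model.

```python
from ultralytics import YOLO

# Detection: conf drops low-confidence boxes, so the box count changes with the threshold
det = YOLO("yolo11n.pt")
for conf in (0.25, 0.90):
    result = det("image.jpg", conf=conf)[0]
    print(f"detection conf={conf}: {len(result.boxes)} boxes kept")

# Classification: the full probability vector comes back regardless of any conf value
cls = YOLO("yolo11n-cls.pt")
result = cls("image.jpg")[0]
print("top-5 class indices:", result.probs.top5)
print("top-5 confidences:  ", result.probs.top5conf.tolist())
```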
If you continue to experience this issue, please feel free to provide more details or any additional observations. Your feedback is invaluable in helping us improve our tools. 😊
Thank you for your patience and for being a part of the YOLO community!
@curtinmjc Thank you for raising this issue. I have confirmed that this is a valid issue with the classification model in the Shared Inference API. I have reported this to the development team, and they are currently working on a fix. I will update you as soon as it is resolved. Thank you for your patience and for helping us improve the Ultralytics HUB!
Any progress on this issue? Do you have an ETA for the fix? Thanks.
Thank you for following up!
The issue with the conf parameter in the Inference API for classification models has been confirmed and reported to the development team. They are actively working on a resolution. Unfortunately, I don’t have a specific ETA for the fix yet, but I can assure you it’s being treated as a priority.
In the meantime, you can continue using the existing setup, with the understanding that the conf parameter is currently not affecting classification results as intended. If this is critical to your workflow, please let us know so we can pass on the urgency to the team.
To stay updated, please make sure to watch this thread or the Ultralytics HUB repository. We'll notify you as soon as it’s resolved.
Thank you for your patience and continued support! 😊
It has been about two months since the last update, which said the issue was being worked on and treated as a priority. What is the status?
@curtinmjc Hello! I just asked our team for an update about this issue. I will update you as soon as I have a response.
@curtinmjc hey there, we looked into this some more. The conf argument is mainly intended to eliminate low-confidence bounding boxes, segmentation masks, and keypoints for those tasks.
For classification models, however, it's usually not applied directly as an inference argument. Instead, the full class vector is returned with the class confidences, or the top-k results are returned (e.g. top-1 or top-5), and the user may then threshold these based on a confidence value.
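In other words, the thresholding moves to the client side. A minimal sketch, assuming a response payload whose per-image results carry name and confidence fields (the exact JSON shape here is an assumption; adapt the keys to what your deployment actually returns):

```python
# Client-side thresholding of classification confidences returned by the API.
# Assumed shape: {"images": [{"results": [{"name": ..., "confidence": ...}, ...]}]}
THRESHOLD = 0.60

payload = response.json()
results = payload["images"][0]["results"]

kept = [r for r in results if r["confidence"] >= THRESHOLD]
for r in kept:
    print(f'{r["name"]}: {r["confidence"]:.5f}')  # e.g. 0.61369 passes a 0.60 threshold
```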
Does this make sense?
Okay, so then I assume that the 'conf' parameter should not be part of the model request. If that's true, then you should change the example request Python and cURL code in your Model Deploy tab, because currently they all include 'conf=0.25'.
Thank you for pointing this out! You are absolutely correct: the conf parameter is not applicable for classification models, since classification tasks typically return the full class confidence vector or the top-k results. Including conf in the example requests for classification models might cause confusion.
We will review and update the example Python and cURL code in the "Model Deploy" section to better reflect the correct usage for classification models. For classification scenarios, users should handle confidence thresholds in their post-processing logic rather than as part of the request.
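For illustration, a corrected classification example would simply omit the detection-only arguments; a sketch, under the same endpoint and placeholder assumptions as the request shown earlier in this thread:

```python
import requests

# Classification request without the detection-only conf/iou arguments
url = "https://predict.ultralytics.com"
headers = {"x-api-key": "API_KEY"}
data = {"model": "https://hub.ultralytics.com/models/MODEL_ID", "imgsz": 640}

with open("path/to/image.jpg", "rb") as f:
    response = requests.post(url, headers=headers, data=data, files={"file": f})

print(response.json())  # apply any confidence threshold in your own post-processing
```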
Your feedback is greatly appreciated; it helps us improve our documentation to better serve the community. Thank you for catching this, and we'll ensure the examples are updated appropriately! 😊
Let us know if you have any further questions or observations.
Check out the "Model Preview" section too, because the same issue is there as well.
Thank you for the follow-up! You're absolutely right that the "Model Preview" section should also be reviewed for consistency and accuracy regarding the conf parameter. It's essential to ensure all sections of the documentation reflect the correct usage, especially for classification models where this parameter is not applicable.
We'll share this feedback with the team and include the "Model Preview" section in the review and updates. We truly appreciate your diligence in catching this, as it helps us enhance the user experience and documentation quality.
If you notice anything else or have further suggestions, feel free to share them. Your insights are invaluable to the community! 😊