
After model training on HUB, getting the error below while testing the model

Open kumarneeraj2005 opened this issue 1 year ago • 12 comments

Search before asking

  • [X] I have searched the HUB issues and found no similar bug report.

HUB Component

Inference

Bug

Once model training on HUB Pro is completed, the preview model testing for pose detection reports a problem on your platform... could you kindly tell me what the reason is? (screenshot attached: WhatsApp Image 2024-05-13 at 10 44 50)

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

kumarneeraj2005 avatar May 13 '24 06:05 kumarneeraj2005

@kumarneeraj2005 We just tested Pose inference on our end using a model trained on the Ultralytics COCO8-POSE Dataset and everything seems to be working fine.

The error you are seeing could be happening because of:

  1. Your model (very unlikely)
  2. Something wrong with our shared inference endpoint

Can you please share your model ID with us so we can investigate this further?

sergiuwaxmann avatar May 13 '24 08:05 sergiuwaxmann

@sergiuwaxmann
Please check these 3 models, which I trained using your HUB Pro (see screenshot). Look at the size and name of each model; it's unusual that the Pose m model is bigger than the Pose l model.

kumarneeraj2005 avatar May 13 '24 11:05 kumarneeraj2005

@kumarneeraj2005 The size shown in the screenshot you shared represents the size of the model plus all exported formats. Just by looking at the screenshot, I imagine the first two models are identical - the only difference being that some exports were performed on the second one.

Please share the model IDs (ID in the URL on the model page).

sergiuwaxmann avatar May 13 '24 11:05 sergiuwaxmann

@sergiuwaxmann https://hub.ultralytics.com/models/ZtmNMtUkGNAS1yqYM2AB

Model - 7 May 2024 12:41

kumarneeraj2005 avatar May 13 '24 11:05 kumarneeraj2005

@kumarneeraj2005 Looks like inference is working correctly for the model ID you provided. API response:

{
    "data": [
        {
            "class": 0,
            "confidence": 0.952075719833374,
            "keypoints": [
                0.596757173538208,
                0.6352880597114563,
                1.0,
                0.6028388142585754,
                0.694744348526001,
                1.0,
                0.6017828583717346,
                0.7683327198028564,
                1.0
            ],
            "name": "kidney"
        }
    ],
    "message": "Inference complete.",
    "success": true
}
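
For reference, a minimal sketch (an illustration, not an official client) of how such a response could be parsed, assuming the keypoints list is a flat sequence of normalized x, y, confidence triples:

# Hypothetical example: group the flat keypoints list into (x, y, confidence) triples.
result = {
    "data": [
        {
            "class": 0,
            "confidence": 0.95,
            "keypoints": [0.597, 0.635, 1.0, 0.603, 0.695, 1.0, 0.602, 0.768, 1.0],
            "name": "kidney",
        }
    ],
    "message": "Inference complete.",
    "success": True,
}

for detection in result["data"]:
    flat = detection["keypoints"]
    # Each keypoint is stored as three consecutive values: x, y, confidence
    triples = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
    print(detection["name"], detection["confidence"])
    for x, y, conf in triples:
        print(f"  keypoint at ({x:.3f}, {y:.3f}) with confidence {conf:.2f}")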

Do you still have this issue?

sergiuwaxmann avatar May 13 '24 12:05 sergiuwaxmann

@sergiuwaxmann Yes, I understand; it works for you, and for me as well, but a few photographs still produce that error. I think I found the problem: your platform's error handling is ineffective.

kumarneeraj2005 avatar May 13 '24 12:05 kumarneeraj2005

@kumarneeraj2005 What do you mean by "error handling is ineffective"? Can you explain the issue you are facing (Minimal Reproducible Example) and share one of the images that causes it so that we can improve our platform?

sergiuwaxmann avatar May 13 '24 12:05 sergiuwaxmann

@sergiuwaxmann Set the confidence threshold high, for example 80, and if the object in the picture is faint (YOLO cannot recognize the object and returns nothing), the platform should display an appropriate message instead of an error.
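
For illustration, a minimal client-side sketch of the behaviour being requested here, assuming the endpoint returns an empty data list when nothing is detected above the threshold:

# Hypothetical sketch: show a friendly message instead of an error when nothing is detected.
def summarize(result: dict) -> str:
    if not result.get("success", False):
        return f"Inference failed: {result.get('message', 'unknown error')}"
    if not result.get("data"):
        return "No objects detected above the confidence threshold."
    return f"Detected {len(result['data'])} object(s)."

print(summarize({"success": True, "data": [], "message": "Inference complete."}))
print(summarize({"success": False, "message": "Unhandled server error."}))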

kumarneeraj2005 avatar May 13 '24 12:05 kumarneeraj2005

@kumarneeraj2005

I can confirm this issue (POSE model): no_results_pose

Expected behavior (DETECT model): no_results_detect

We will fix this issue in the next release (following days) - will keep you updated.

Thank you for bringing this to our attention!

sergiuwaxmann avatar May 13 '24 12:05 sergiuwaxmann

I'm also having this problem. My old models work fine, but my new model does not.

New model: https://hub.ultralytics.com/models/MFE4Iwe37kJRfqy9440W Old model: https://hub.ultralytics.com/models/yWb4FSzpQ9wBXqf5WzCX

Besides the preview error, I also get another error when exporting, while the old model works fine (screenshot attached).

Thank you for the hard work; I hope this issue will be fixed soon.

suren1986 avatar May 21 '24 15:05 suren1986

@suren1986 The POSE preview issue was solved and the fix will be deployed in the next release (by the looks of it, tomorrow by EOD). If you are having a similar issue for segmentation, it might be the same problem as #691. Can you share an image that has this issue? Or does this issue occur for any image?

Regarding the export, does this issue occur for any new models or just for the model you shared? Can you train a new model for 2-3 epochs and check if you have the same issue? Also, before training a new model, can you try again (it could be that the server faced a temporary issue)? If you still have this issue, can you open a new issue so that we can properly log it and other users can easily find the discussion?

sergiuwaxmann avatar May 21 '24 16:05 sergiuwaxmann

I have the same problem. Every image fails in the preview of the trained model.

I tried changing both 'Confidence Threshold' and 'IoU Threshold', but it still doesn't work.

Here's my model ID : https://hub.ultralytics.com/models/D9ttDl0mWIyPHisfp4lQ

capture

sctcorp01 avatar May 22 '24 00:05 sctcorp01

@sctcorp01 hi there! Thanks for sharing the details. It seems like you're encountering a known issue with the preview functionality for certain models. We are actively working on a fix for this. In the meantime, could you please try running your model using the API directly with a sample image to see if the inference works outside of the preview environment? Here's a quick example on how to do this using Python:

import requests

# The model ID is already part of the URL; replace the API key with your own
url = "https://api.ultralytics.com/v1/predict/D9ttDl0mWIyPHisfp4lQ"
headers = {"x-api-key": "your_api_key_here"}

with open("path/to/your/image.jpg", "rb") as image_file:
    files = {"image": image_file}
    response = requests.post(url, headers=headers, files=files)
    print(response.json())

This might help us understand if the issue is specific to the preview or a broader problem with the model. Let us know how it goes! πŸš€
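
As a follow-up, a small helper (a sketch, with the response fields assumed from the example earlier in this thread) can make it easier to tell a server-side failure apart from a model that simply found nothing:

import requests

def report(response: requests.Response) -> None:
    # Sketch: distinguish HTTP errors, server-side errors, and empty results.
    if response.status_code != 200:
        print(f"HTTP error {response.status_code}: {response.text}")
        return
    result = response.json()
    if not result.get("success", False):
        print(f"Server-side error: {result.get('message', 'unknown error')}")
    elif not result.get("data"):
        print("Request succeeded, but no objects were detected.")
    else:
        print(f"Detections: {result['data']}")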

pderrenger avatar May 22 '24 04:05 pderrenger

Thank you for your reply. I have shared the image with the error in this reply: https://github.com/ultralytics/hub/issues/691#issuecomment-2123913634. This error happens for every image I preview with the model.

suren1986 avatar May 22 '24 05:05 suren1986

Hello! Thank you for your answer.

I tried running your Python code in Visual Studio Code with a .png image file, but it doesn't work.

Here's my log message: {'message': 'Unhandled server error.', 'success': False}

Thanks a lot.

sctcorp01 avatar May 22 '24 05:05 sctcorp01

@kumarneeraj2005 @suren1986 @sctcorp01 New release πŸš€ Inference and exports should work fine now.

sergiuwaxmann avatar May 22 '24 11:05 sergiuwaxmann

Excellent! Thank you for your hard work!

suren1986 avatar May 22 '24 12:05 suren1986

@suren1986 @sergiuwaxmann It appears that functionality broke after the deployment, and inference is no longer working; please see the attached picture.

kumarneeraj2005 avatar May 26 '24 04:05 kumarneeraj2005

@kumarneeraj2005 I can’t reproduce this issue, but I will investigate further. When you export your model, the weights used for inference do not change.

Are you sure you were receiving inference results previously on the image you are trying now? I ask because it might simply be that the model is unable to detect anything in the current image.

sergiuwaxmann avatar May 26 '24 06:05 sergiuwaxmann

Yes @sergiuwaxmann, I confirm it was functioning earlier. I have trained three pose models, and previously all three were working well, but suddenly nothing is working on your platform, that is, it is not detecting anything. However, the same model works on my local system after export (see screenshot).

kumarneeraj2005 avatar May 26 '24 06:05 kumarneeraj2005

@kumarneeraj2005 Can you please share the IDs of these models?

sergiuwaxmann avatar May 26 '24 06:05 sergiuwaxmann

All three pose models were trained on your HUB platform (see screenshot).

kumarneeraj2005 avatar May 26 '24 06:05 kumarneeraj2005

@kumarneeraj2005 I understand. The model ID is available in the URL of the model. Please share the IDs or URLs so I can identify the models.

sergiuwaxmann avatar May 26 '24 06:05 sergiuwaxmann

@sergiuwaxmann https://hub.ultralytics.com/models/ryUTk4S2t2DbAfEOJINh https://hub.ultralytics.com/models/ZtmNMtUkGNAS1yqYM2AB https://hub.ultralytics.com/models/4xX34oARS5uZC8BUZhJr

kumarneeraj2005 avatar May 26 '24 06:05 kumarneeraj2005

@kumarneeraj2005 Thank you!

sergiuwaxmann avatar May 26 '24 07:05 sergiuwaxmann

Could you please let me know if it's a bug or not?

kumarneeraj2005 avatar May 28 '24 06:05 kumarneeraj2005

Hi @kumarneeraj2005,

We have internally checked the issue and are unable to reproduce it on our end. We trained the models using the official COCO8-pose dataset and found that after training, the inference is working fine. Additionally, there are no issues with exports.

You can verify the model using this link: https://hub.ultralytics.com/models/Y88Hm7b757UzLUO0ZSju?tab=preview

Please let us know if there are any specific steps or configurations you are using that might help us replicate the problem.

yogendrasinghx avatar May 28 '24 07:05 yogendrasinghx

@sergiuwaxmann @yogendrasinghx The old issue is coming up again; it seems your system broke after the bug fixes (see attached screenshots).

kumarneeraj2005 avatar May 28 '24 10:05 kumarneeraj2005

@kumarneeraj2005 I will investigate this again. Thank you!

sergiuwaxmann avatar May 28 '24 10:05 sergiuwaxmann

@sergiuwaxmann @pderrenger Could you please tell me if you have a prompt support system? If your platform isn't operating, what's the sense of paying a monthly fee? I'd like to cancel my membership; it appears your platform is not yet ready for production.

kumarneeraj2005 avatar May 29 '24 11:05 kumarneeraj2005