
How to obtain metrics like mAPs, mAPm

yaoergogo opened this issue 1 year ago • 7 comments

Search before asking

  • [X] I have searched the YOLOv8 issues and discussions and found no similar questions.

Question

I want to know how to obtain metrics like mAPs and mAPm. I already have the prediction.json file, but I don't know how to get these metrics from it.

Additional

No response

yaoergogo avatar Apr 29 '24 09:04 yaoergogo

Hey there! 🚀 To obtain metrics like mAPs (mAP small) and mAPm (mAP medium), you can use the val mode of YOLOv8. When you validate your model using a dataset that includes ground truth annotations, these metrics will automatically be calculated along with others like mAP50, mAP75, etc.

Given that you already have your predictions.json, you can use a tool like COCOeval if your predictions are in the COCO format. Here's a quick example of how you might use it:

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Load your ground truth annotations and predictions
cocoGt = COCO("path_to/ground_truth_annotations.json")
cocoDt = cocoGt.loadRes("path_to/predictions.json")

# Initialize COCOeval object
cocoEval = COCOeval(cocoGt, cocoDt, 'bbox')

# Evaluate on a subset or all the data
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()

If you are using the Ultralytics framework, ensure your model validation step is performed with the respective dataset containing the ground truth. Here's a simplified CLI command:

yolo val model=your_trained_model.pt data=your_dataset.yaml

This command will provide you with a summary of various metrics including mAPs and mAPm when run on your test dataset.

Let me know if this helps or if you need further assistance. Happy detecting!

glenn-jocher avatar Apr 29 '24 13:04 glenn-jocher

Could you give me a Python script file instead of a CLI command? My model has been modified, and when I use the COCO format, the results are always -1. (screenshot attached)

yaoergogo avatar May 07 '24 08:05 yaoergogo

Hello! 😊 If you need to evaluate your model using Python rather than CLI commands, here’s how you can do it using a Python script:

from ultralytics import YOLO

# Load your model (adjust the model and dataset paths below to your own)
model = YOLO('path_to_your_custom_model.pt')

# Evaluate the model
results = model.val(data='path_to_your_dataset.yaml')

# Print the results
print(results)
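
If you want individual values rather than the printed summary, the returned metrics object exposes them as attributes (these are standard Ultralytics detection-metrics fields):

# Access specific metrics from the returned object
print(results.box.map)    # mAP50-95
print(results.box.map50)  # mAP at IoU 0.50
print(results.box.map75)  # mAP at IoU 0.75
print(results.box.maps)   # per-class mAP50-95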

Please ensure your dataset is correctly formatted and matches the annotations used during training. In COCOeval, a -1 typically means a category or area range had nothing to evaluate, which often points to mismatched category or image IDs between your predictions and ground truth. Double-check your dataset paths and formatting. If the problem persists, feel free to share more details so we can help you troubleshoot further! 🛠️
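
If the -1 persists, a quick sanity check is to confirm that the category and image IDs in your predictions line up with the ground truth; here is a minimal sketch using pycocotools (paths are placeholders):

import json
from pycocotools.coco import COCO

gt = COCO("path_to/ground_truth_annotations.json")
with open("path_to/predictions.json") as f:
    preds = json.load(f)

# COCOeval reports -1 when a category or area range has nothing to evaluate,
# so mismatched ids between the two files are a common culprit.
print("GT category ids:  ", sorted(gt.getCatIds()))
print("Pred category ids:", sorted({p["category_id"] for p in preds}))
print("GT image ids (first 5):  ", sorted(gt.getImgIds())[:5])
print("Pred image ids (first 5):", sorted({p["image_id"] for p in preds})[:5])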

Let me know if this helps or if there's anything else you need!

glenn-jocher avatar May 07 '24 19:05 glenn-jocher

Hi, I used your script, but it didn't show mAPs and mAPm. Also, when I evaluated with COCOeval, I found the mAP differs a lot from what YOLO reports: YOLO gives about 73, but COCOeval shows only 57.5%, and the small-object accuracy is very low. I'd like to know how to improve it. One more question: I'm using the OBB detection mode, and after converting my YOLO dataset to COCO format, the boxes come out as HBB, not OBB. How can I fix that? (screenshot attached)

yaoergogo avatar May 08 '24 07:05 yaoergogo

Hi there! It sounds like you're facing a couple of issues with your model evaluations and format conversions.

  1. Discrepancy in mAP scores: The difference in mAP between YOLO and COCOeval can stem from how each tool computes precision and handles detection confidence thresholds. Make sure both evaluations use the same IoU and confidence thresholds; see the sketch after this list.

  2. Improving performance on small objects: This is often challenging. You might consider:

    • Adjusting the anchor scales to better match the size of the small objects in your dataset.
    • Incorporating more small object samples into your training set.
    • Employing techniques like image pyramids or multi-scale training, if you aren't already.
  3. OBB vs. HBB: For OBB (Oriented Bounding Boxes), the standard COCO format, which uses HBB (Horizontal Bounding Boxes), won't suffice. You'll need a dataset format that supports OBB, such as DOTA, or you can convert your dataset annotations into a compatible format. You'd also need to ensure your model is specifically trained to predict OBB.
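
On the first point, here's a minimal sketch of aligning the two evaluations, assuming the Ultralytics Python API (conf, iou, and save_json are standard val arguments). COCOeval does not pre-filter detections by score, so validate with a near-zero confidence threshold before comparing:

from ultralytics import YOLO

model = YOLO('path_to_your_custom_model.pt')

# Use a near-zero confidence threshold to mirror COCOeval's behaviour and
# export COCO-format predictions for a like-for-like comparison.
results = model.val(data='path_to_your_dataset.yaml', conf=0.001, iou=0.7, save_json=True)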

Here’s a quick suggestion for OBB data format:

# Example to modify the bounding box format:
def convert_to_obb(hbb_annotations):
    """Convert horizontal BBs (HBB) to oriented BBs (OBB) by appending an angle."""
    # Assumes ann['bbox'] is [x_center, y_center, width, height]; note that COCO
    # stores [x_top_left, y_top_left, width, height], so convert first if needed.
    obb_annotations = []
    for ann in hbb_annotations:
        x_center, y_center, width, height = ann['bbox']
        angle = 0  # Assuming a rotation angle of zero for this example
        obb_annotations.append([x_center, y_center, width, height, angle])
    return obb_annotations

Please double-check your conversion functions and evaluation setups. This adjustment might require significant changes depending on your specific framework or library setup. Let me know if you need any more detailed guidance!

glenn-jocher avatar May 08 '24 14:05 glenn-jocher

I'd like to ask: I'm currently using OBB detection, but I want to measure the accuracy on small objects. COCOeval doesn't seem to support OBB datasets, so my prediction.json doesn't match the format of my original annotations.json. How can I solve this?

yaoergogo avatar May 14 '24 11:05 yaoergogo

Hi there! 😊 It sounds like you're encountering format compatibility issues with COCOeval due to using OBB (Oriented Bounding Boxes). Unfortunately, COCOeval indeed only supports HBB (Horizontal Bounding Boxes).

To resolve this, you might consider converting your OBB annotations to HBB for the purpose of evaluation with COCOeval, or you could use a custom evaluation script that supports OBB. Here's a simple example of how you might convert OBB to HBB:

def obb_to_hbb(obb):
    """Convert an unrotated OBB (cx, cy, w, h, angle) to a COCO-style HBB [x, y, w, h]."""
    cx, cy, w, h, angle = obb
    # This assumes the angle is 0 and the box is not rotated
    x_min = cx - w / 2
    y_min = cy - h / 2
    x_max = cx + w / 2
    y_max = cy + h / 2
    return [x_min, y_min, x_max - x_min, y_max - y_min]

This function converts an oriented bounding box to a horizontal one by assuming no rotation. For actual rotated boxes, the conversion would be more complex and depend on the rotation angle.
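
For illustration, here is a minimal sketch of that more general case, assuming the OBB is given as (cx, cy, w, h, angle) with the angle in radians; it returns the axis-aligned box that encloses the rotated rectangle:

import math

def rotated_obb_to_hbb(obb):
    """Return the COCO-style [x, y, w, h] box enclosing a rotated rectangle."""
    cx, cy, w, h, angle = obb
    cos_a, sin_a = abs(math.cos(angle)), abs(math.sin(angle))
    # Dimensions of the smallest axis-aligned rectangle containing the rotated box
    hbb_w = w * cos_a + h * sin_a
    hbb_h = w * sin_a + h * cos_a
    return [cx - hbb_w / 2, cy - hbb_h / 2, hbb_w, hbb_h]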

If you need to handle rotations or find a more robust solution, you might need to look into evaluation tools or libraries specifically designed for datasets with OBB, such as those used in satellite imagery or aerial photography analysis. Let me know if you need further assistance!

glenn-jocher avatar May 20 '24 05:05 glenn-jocher

Hello, I have two questions I hope you can help with: (1) I haven't been able to find any evaluation tools or libraries specifically designed for OBB datasets; could you point me to some? (2) If I give up on approach (1), I'd like to follow DOTA's definition of a small object, anything within 10-50 pixels, and compute mAPs from that definition. How should the code be changed, and in which file?

yaoergogo avatar May 29 '24 14:05 yaoergogo

@yaoergogo Hello! 👋 Here are some suggestions for your two questions:

  1. OBB evaluation tools: It can indeed be hard to find an off-the-shelf evaluation tool that supports OBB. One option is to modify an existing evaluation script to handle OBB; another is to write a simple evaluation script of your own. Either may take some custom programming, depending on your needs and the characteristics of your dataset.

  2. Defining small objects and computing mAPs: You need to add a filter to the evaluation so that only objects matching your small-object definition contribute to the mAP. This is usually handled in the script that evaluates the predictions: before processing the detections, check each bounding box's size and include it in the mAP calculation only when its width and height are both between 10 and 50 pixels. Concretely, that means adding a size check where the mAP is computed; see the sketch after this list.
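
On point 2, if your boxes are (or have been converted to) COCO-format HBBs, you don't need to modify val.py at all: COCOeval exposes its size buckets through the areaRng and areaRngLbl parameters, so you can redefine "small" to match the DOTA convention. A minimal sketch, assuming your predictions.json is in COCO format (paths are placeholders, and areaRng values are in pixels squared):

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

cocoGt = COCO("path_to/ground_truth_annotations.json")
cocoDt = cocoGt.loadRes("path_to/predictions.json")

cocoEval = COCOeval(cocoGt, cocoDt, 'bbox')
# Redefine "small" as a 10-50 px side length (DOTA-style). Keep the other
# buckets so that summarize() still finds all four standard labels.
cocoEval.params.areaRng = [[0, 1e10], [10 ** 2, 50 ** 2], [32 ** 2, 96 ** 2], [96 ** 2, 1e10]]
cocoEval.params.areaRngLbl = ['all', 'small', 'medium', 'large']
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()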

If you'd like further guidance or help adapting this to your setup, just let me know. I'm happy to help!

glenn-jocher avatar May 29 '24 20:05 glenn-jocher

Could you give me a code example of that filter? Is it the val.py script I should modify?

yaoergogo avatar May 30 '24 09:05 yaoergogo

Hello! 👋 To add a filter that computes mAP only over small objects during evaluation, you can add a bit of code where the detection results are processed. This usually isn't a direct modification of val.py; it belongs where the result data is handled. Here's a basic example showing how to filter out small objects in Python before computing their mAP:

def filter_small_objects(detections, min_size=10, max_size=50):
    """Keep only detections whose box width and height are within [min_size, max_size] pixels."""
    # Each detection is expected to be a dict with a COCO-style 'bbox' = [x, y, width, height]
    filtered_detections = []
    for detection in detections:
        width = detection['bbox'][2]
        height = detection['bbox'][3]
        if min_size <= width <= max_size and min_size <= height <= max_size:
            filtered_detections.append(detection)
    return filtered_detections

# Example usage with your detection results
filtered_results = filter_small_objects(detection_results)
# Proceed with mAP calculation on filtered_results

This code defines a function, filter_small_objects, which takes a list of detection results and a size range and returns only the detections that meet the size requirement. You'll need to adapt it to your actual data structure.
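
If you then want to feed the filtered results into COCOeval, one hypothetical approach is to write them to a new predictions file. Keep in mind that filtering only the predictions does not give a faithful mAP-small, since the ground truth must be restricted to the same size range as well, which is exactly what COCOeval's areaRng mechanism (see the earlier sketch) handles for you:

import json

# Load the raw COCO-format predictions (path is a placeholder)
with open("path_to/predictions.json") as f:
    detection_results = json.load(f)

# Keep only the small-object predictions and save them for later evaluation
filtered_results = filter_small_objects(detection_results)
with open("path_to/predictions_small.json", "w") as f:
    json.dump(filtered_results, f)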

If you need help integrating this logic into your evaluation pipeline, or have any other questions, feel free to ask!

glenn-jocher avatar May 30 '24 15:05 glenn-jocher

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

  • Docs: https://docs.ultralytics.com
  • HUB: https://hub.ultralytics.com
  • Community: https://community.ultralytics.com

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

github-actions[bot] avatar Jul 01 '24 00:07 github-actions[bot]