Mask_RCNN
How to calculate TP, TN, FP, FN?
Hi guys,
I would like to know how to calculate TP, TN, FP and FN on Mask R-CNN predictions.
If you take only the masks into account (and not the class predictions), you can use the compute_matches function in utils.py. This function returns the vectors gt_match and pred_match:
- If gt_match[i] = -1, the mask gt_masks[i] was not detected: it is a FN.
- If gt_match[i] = j > -1, the mask gt_masks[i] is matched by the mask pred_masks[j]: we have a TP.
- If pred_match[j] = -1, we have a prediction without a GT (ground truth) mask: it is a FP.
There is no TN in this context.
@awalshz, thanks for the reply. In some cases I get gt_match = [0. 1.] and pred_match = [0. 1. -1. -1. -1.]. Is this prediction considered a TP or a FP?
A single image can contain TP, FP and FN masks at the same time. In this case you have an image with 2 objects (two GT masks) and 5 predicted masks. The first two predictions are TPs and the other three are FPs. There are no FNs.
The image as a whole cannot be classified as TP, TN or anything else.
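To make the counting concrete, here is a minimal NumPy sketch (my own helper, not part of the repo) that turns the gt_match / pred_match vectors returned by compute_matches into TP, FP and FN counts, applied to the example arrays above:

```python
import numpy as np

def counts_from_matches(gt_match, pred_match):
    """Count TP, FP, FN from the match vectors of compute_matches.

    gt_match[i] == -1   -> GT mask i was never matched (FN)
    pred_match[j] == -1 -> predicted mask j has no GT (FP)
    Every matched prediction counts once as a TP.
    """
    gt_match = np.asarray(gt_match)
    pred_match = np.asarray(pred_match)
    tp = int(np.sum(pred_match > -1))   # matched predictions
    fp = int(np.sum(pred_match == -1))  # unmatched predictions
    fn = int(np.sum(gt_match == -1))    # unmatched ground truths
    return tp, fp, fn

# The example from this thread: 2 GT masks, 5 predictions.
print(counts_from_matches([0., 1.], [0., 1., -1., -1., -1.]))  # (2, 3, 0)
```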
Hi, I wrote this in object-detection terms for bounding boxes. It works only for single-class detection on one image. All you need is to extract the GT boxes from COCO and the predicted boxes from the model results:
import numpy as np

def get_iou(a, b, epsilon=1e-5, intersection_check=False):
    # boxes are in (x1, y1, x2, y2) format
    x1 = max(a[0], b[0])
    y1 = max(a[1], b[1])
    x2 = min(a[2], b[2])
    y2 = min(a[3], b[3])
    width = (x2 - x1)
    height = (y2 - y1)
    if (width < 0) or (height < 0):
        if intersection_check:
            return 0.0, False
        else:
            return 0.0
    area_overlap = width * height
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    area_combined = area_a + area_b - area_overlap
    iou = area_overlap / (area_combined + epsilon)
    if intersection_check:
        return iou, bool(area_overlap)
    else:
        return iou

def calc_conditions(gt_boxes, pred_boxes, iou_thresh=0.5, hard_fp=True):
    # hard_fp is accepted but unused in this version
    gt_class_ids_ = np.zeros(len(gt_boxes))
    pred_class_ids_ = np.zeros(len(pred_boxes))
    tp, fp, fn = 0, 0, 0
    for i in range(len(gt_class_ids_)):
        iou = []
        for j in range(len(pred_class_ids_)):
            now_iou, intersect = get_iou(gt_boxes[i], pred_boxes[j], intersection_check=True)
            if now_iou >= iou_thresh and intersect:
                iou.append(now_iou)
                gt_class_ids_[i] = 1
                pred_class_ids_[j] = 1
        if len(iou) > 0:        # if a GT box overlaps more than one prediction,
            tp += 1             # the one with the highest IoU is the TP
            fp += len(iou) - 1  # and all the others are FPs
    fn += np.count_nonzero(np.array(gt_class_ids_) == 0)
    fp += np.count_nonzero(np.array(pred_class_ids_) == 0)
    return tp, fp, fn
For binary segmentation, you can get the predicted mask for each image and compare it with the true mask using this code: https://gist.github.com/aunsid/b28c87f98983f00163f6e588e3da1191
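The pixel-wise comparison behind that approach can be sketched in a few lines of NumPy (a minimal version of my own, not the linked gist): treat both masks as boolean arrays and count pixels in each of the four categories.

```python
import numpy as np

def mask_confusion(gt_mask, pred_mask):
    """Pixel-wise TP/TN/FP/FN between two boolean masks of equal shape."""
    gt = np.asarray(gt_mask, dtype=bool)
    pred = np.asarray(pred_mask, dtype=bool)
    tp = int(np.sum(gt & pred))    # foreground in both
    tn = int(np.sum(~gt & ~pred))  # background in both
    fp = int(np.sum(~gt & pred))   # predicted foreground, true background
    fn = int(np.sum(gt & ~pred))   # missed foreground
    return tp, tn, fp, fn

gt = np.array([[1, 1], [0, 0]])
pred = np.array([[1, 0], [1, 0]])
print(mask_confusion(gt, pred))  # (1, 1, 1, 1)
```

Note that, unlike box-level matching, the pixel-wise view does have a meaningful TN count (background pixels correctly left unsegmented).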
You can get an implementation of the mask-based calculation of TP, FP, FN from here.
@konstantin-frolov thanks for the code.
Hello, please answer my question. You said it works only for single-class detection on one image, but my dataset has many detections.
Is it okay if there are multiple objects in one image?
No. You need to rewrite this code so it checks the class of the bounding boxes and recalculates TP, FP and FN when the classes don't match.
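One way to do that rewrite (a sketch of my own, mirroring the logic of calc_conditions above rather than reproducing it exactly): split the boxes by class id and run the single-class matching per class, so a prediction can only match a ground truth of the same class.

```python
import numpy as np

def iou(a, b, epsilon=1e-5):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    w, h = x2 - x1, y2 - y1
    if w <= 0 or h <= 0:
        return 0.0
    overlap = w * h
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return overlap / (area_a + area_b - overlap + epsilon)

def calc_conditions_multiclass(gt_boxes, gt_ids, pred_boxes, pred_ids,
                               iou_thresh=0.5):
    """TP/FP/FN over all classes: boxes only match within the same class."""
    tp = fp = fn = 0
    for cls in set(gt_ids) | set(pred_ids):
        gts = [b for b, c in zip(gt_boxes, gt_ids) if c == cls]
        preds = [b for b, c in zip(pred_boxes, pred_ids) if c == cls]
        matched = set()
        for g in gts:
            hits = [j for j, p in enumerate(preds)
                    if j not in matched and iou(g, p) >= iou_thresh]
            if hits:
                # the best-IoU prediction is the TP; the rest stay
                # unmatched and are counted as FPs below
                best = max(hits, key=lambda j: iou(g, preds[j]))
                matched.add(best)
                tp += 1
            else:
                fn += 1  # GT box with no same-class prediction
        fp += len(preds) - len(matched)  # leftover predictions of this class
    return tp, fp, fn

# Two GT objects of classes 1 and 2; three predictions, one of which
# overlaps a GT box perfectly but the class-1 objects never line up.
print(calc_conditions_multiclass(
    [[0, 0, 10, 10], [20, 20, 30, 30]], [1, 2],
    [[0, 0, 10, 10], [20, 20, 30, 30], [40, 40, 50, 50]], [2, 2, 1]))
# (1, 2, 1)
```

The key difference from the single-class version is that an overlapping box of the wrong class counts as both a FP (spurious prediction) and leaves its GT box as a FN (missed object).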
Thanks, but I found compute_recall in utils.py. It looks like I can do the same with it.
Is there any way to calculate TP, FP and FN for Faster R-CNN with multiple classes?
@konstantin-frolov, can you help me rewrite your code so it works for multi-class detection, since, as you said, it only works for a single class?