open-solution-salt-identification
Confusion Matrix
As per the discussions on Kaggle, your implementation is the only one that is fully correct for the given metric, but there is one thing in your code that I couldn't understand. Here are the three functions:
```python
import numpy as np
from pycocotools import mask as cocomask

def compute_ious(gt, predictions):
    gt_ = get_segmentations(gt)
    predictions_ = get_segmentations(predictions)

    if len(gt_) == 0 and len(predictions_) == 0:
        return np.ones((1, 1))
    elif len(gt_) != 0 and len(predictions_) == 0:
        return np.zeros((1, 1))
    else:
        iscrowd = [0 for _ in predictions_]
        ious = cocomask.iou(gt_, predictions_, iscrowd)
        if not np.array(ious).size:
            ious = np.zeros((1, 1))
        return ious

def compute_precision_at(ious, threshold):
    mx1 = np.max(ious, axis=0)
    mx2 = np.max(ious, axis=1)
    tp = np.sum(mx2 >= threshold)
    fp = np.sum(mx2 < threshold)
    fn = np.sum(mx1 < threshold)
    return float(tp) / (tp + fp + fn)

def compute_eval_metric(gt, predictions):
    thresholds = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
    ious = compute_ious(gt, predictions)
    precisions = [compute_precision_at(ious, th) for th in thresholds]
    return sum(precisions) / len(precisions)
```
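To illustrate what `compute_ious` produces in the single-mask case, here is a minimal NumPy-only sketch. The `dense_iou_matrix` helper is hypothetical, standing in for `cocomask.iou` on dense boolean masks purely for illustration; it is not part of the repo:

```python
import numpy as np

def dense_iou_matrix(gt_masks, pred_masks):
    # Hypothetical stand-in for cocomask.iou: inputs are lists of
    # boolean HxW arrays; returns an (n_gt, n_pred) matrix of IoUs.
    ious = np.zeros((len(gt_masks), len(pred_masks)))
    for i, g in enumerate(gt_masks):
        for j, p in enumerate(pred_masks):
            inter = np.logical_and(g, p).sum()
            union = np.logical_or(g, p).sum()
            ious[i, j] = inter / union if union else 0.0
    return ious

# One ground-truth mask and one predicted mask per image
# (the salt-identification setting) -> a 1x1 matrix.
gt = [np.array([[1, 1], [0, 0]], dtype=bool)]
pred = [np.array([[1, 0], [0, 0]], dtype=bool)]
print(dense_iou_matrix(gt, pred))  # [[0.5]]
```

With one ground-truth object and one prediction the matrix has shape (1, 1), which is exactly what the question below is about.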
Now, given that `compute_ious` works on a single prediction and its corresponding ground truth, `ious` will be a singleton array. How are you then calculating TP/FP from that? Am I missing something here?
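To make the question concrete, here is the same `compute_precision_at` logic restated standalone and applied to a 1x1 IoU matrix; the whole TP/FP/FN computation collapses to a single comparison:

```python
import numpy as np

def precision_at(ious, threshold):
    # Same logic as compute_precision_at above, restated for illustration.
    mx1 = np.max(ious, axis=0)  # best IoU per prediction (column-wise max)
    mx2 = np.max(ious, axis=1)  # best IoU per ground-truth object (row-wise max)
    tp = np.sum(mx2 >= threshold)
    fp = np.sum(mx2 < threshold)
    fn = np.sum(mx1 < threshold)
    return float(tp) / (tp + fp + fn)

ious = np.array([[0.7]])         # single ground truth vs single prediction
print(precision_at(ious, 0.5))   # 1.0  (tp=1, fp=0, fn=0)
print(precision_at(ious, 0.75))  # 0.0  (tp=0, fp=1, fn=1)
```

On a 1x1 matrix, `mx1` and `mx2` are both the same single value, so the result is always either 1.0 or 0.0 at each threshold.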
Hmm, I have to say that I simply reused this evaluation (which was originally written for DSB-2018), switched the IoU computation to the COCO implementation for speed, added handling for empty predictions, and just lived with it :).
That being said, I will look into it and get back to you.
Thank you @AakashKumarNain for pointing this out.