
ValueError: cannot reshape array of size 0 into shape (0,newaxis)

paulaceccon opened this issue · 5 comments

This might not be the best place for it, but I keep getting this error when adding the predictions and ground truth:

ValueError: cannot reshape array of size 0 into shape (0,newaxis)

    metric_fn.add(np.array(pred), np.array(gt))
  File "/usr/local/lib/python3.6/dist-packages/mean_average_precision/mean_average_precision.py", line 63, in add
    match_table = compute_match_table(preds_c, gt_c, self.imgs_counter)
  File "/usr/local/lib/python3.6/dist-packages/mean_average_precision/utils.py", line 139, in compute_match_table
    difficult = np.repeat(gt[:, 5], preds.shape[0], axis=0).reshape(preds[:, 5].shape[0], -1).tolist()
ValueError: cannot reshape array of size 0 into shape (0,newaxis)

From the traceback, the issue seems to be happening here:

difficult = np.repeat(gt[:, 5], preds.shape[0], axis=0).reshape(preds[:, 5].shape[0], -1).tolist()

But if I perform it manually:

print(pred)
print(gt)
print(np.repeat(gt[:, 5], pred.shape[0], axis=0).reshape(pred[:, 5].shape[0], -1).tolist())

I don't get any error at all:

[[  0.        81.        77.       222.         0.         0.724039]]
[[  0.  83.  72. 184.   0.   0.   0.]]
[[0.0]]
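
For reference, the error reproduces in isolation: NumPy cannot resolve the -1 in a reshape when the fixed dimension is 0, because any column count would satisfy 0 * k == 0. A minimal sketch, independent of the library:

import numpy as np

np.empty(0).reshape(1, -1)  # fine: -1 resolves to 0, result has shape (1, 0)
np.empty(0).reshape(0, -1)  # ValueError: cannot reshape array of size 0 into shape (0,newaxis)

So an empty array must be reaching that line inside the library, even though the full pred and gt arrays printed above are non-empty.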

paulaceccon · Dec 21 '20

Update: this bug has been fixed in the latest build: pip install --upgrade git+https://github.com/bes-dev/mean_average_precision.git (follow the instructions in the readme). The instructions on the project's PyPI page (https://pypi.org/project/mean-average-precision/) are outdated.


The error message is misleading, since the failure is actually tied to the num_classes argument of the metric: for the script to run, every class index up to num_classes - 1 must be present in the predictions. However, this is certainly a bug: num_classes should rather refer to the classes of the ground truth. (A minimal sketch of the mechanism follows the two examples below.)

This will fail:

import numpy as np
from mean_average_precision import MeanAveragePrecision  # import assumed from the 0.0.x API

# create metric_fn
metric_fn = MeanAveragePrecision(num_classes=3)  # <- see here

# [xmin, ymin, xmax, ymax, class_id, difficult, crowd]
gt = np.array([
    [439, 157, 556, 241, 0, 0, 0],
])

# [xmin, ymin, xmax, ymax, class_id, confidence]
preds = np.array([
    [429, 219, 528, 247, 0, 0.460851],
    [433, 260, 506, 336, 1, 0.269833],  # <- see here: only 2 of the 3 classes are present in the preds
])

# add some samples to evaluation
for i in range(10):
    metric_fn.add(preds, gt)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-118-04771a0da08d> in <module>
     15 # add some samples to evaluation
     16 for i in range(10):
---> 17     metric_fn.add(preds, gt)
     18 
     19 # compute PASCAL VOC metric

~/anaconda3/envs/rdaw/lib/python3.7/site-packages/mean_average_precision/mean_average_precision.py in add(self, preds, gt)
     61             if preds.shape[0] > 0:
     62                 preds_c = preds[preds[:, 4] == c]
---> 63                 match_table = compute_match_table(preds_c, gt_c, self.imgs_counter)
     64                 self.match_table[c] = self.match_table[c].append(match_table)
     65         self.imgs_counter = self.imgs_counter + 1

~/anaconda3/envs/rdaw/lib/python3.7/site-packages/mean_average_precision/utils.py in compute_match_table(preds, gt, img_id)
    137     img_ids = [img_id for i in range(preds.shape[0])]
    138     confidence = preds[:, 5].tolist()
--> 139     difficult = np.repeat(gt[:, 5], preds.shape[0], axis=0).reshape(preds[:, 5].shape[0], -1).tolist()
    140     crowd = np.repeat(gt[:, 6], preds.shape[0], axis=0).reshape(preds[:, 5].shape[0], -1).tolist()
    141     match_table = {

ValueError: cannot reshape array of size 0 into shape (0,newaxis)

But this is OK:

# create metric_fn (same imports as above)
metric_fn = MeanAveragePrecision(num_classes=2)  # <- see here

# [xmin, ymin, xmax, ymax, class_id, difficult, crowd]
gt = np.array([
    [439, 157, 556, 241, 0, 0, 0],
])

# [xmin, ymin, xmax, ymax, class_id, confidence]
preds = np.array([
    [429, 219, 528, 247, 0, 0.460851],
    [433, 260, 506, 336, 1, 0.269833],  # <- see here: both classes are present in the preds
])

# add some samples to evaluation
for i in range(10):
    metric_fn.add(preds, gt)
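
In a nutshell, the failure can be reproduced outside the library. The per-class loop below is an assumption reconstructed from the traceback, not the library's exact source:

import numpy as np

# [xmin, ymin, xmax, ymax, class_id, difficult, crowd]
gt = np.array([[439, 157, 556, 241, 0, 0, 0]], dtype=float)

# [xmin, ymin, xmax, ymax, class_id, confidence]
preds = np.array([
    [429, 219, 528, 247, 0, 0.460851],
    [433, 260, 506, 336, 1, 0.269833],
])

for c in range(3):  # num_classes=3
    preds_c = preds[preds[:, 4] == c]  # empty for c=2: no predictions of class 2
    gt_c = gt[gt[:, 4] == c]           # empty for c=1 and c=2
    # The failing line from utils.py:139. For c=1 the target shape is (1, -1),
    # so -1 resolves to 0 and the reshape passes; for c=2 it is (0, -1),
    # which NumPy cannot resolve -> ValueError.
    difficult = np.repeat(gt_c[:, 5], preds_c.shape[0], axis=0).reshape(
        preds_c[:, 5].shape[0], -1).tolist()

So any class index in range(num_classes) that has zero predictions triggers the error, which is why num_classes=2 works here while num_classes=3 does not.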

mjkvaak · Jan 27 '21

Hi, thanks for your feedback. This issue was fixed in master, but the PyPI package does not include the fix yet. I'll try to rebuild the PyPI package as soon as I can.

bes-dev · Feb 01 '21

I still have the problem above with version 0.0.2.1 (installed with pip install --upgrade git+https://github.com/bes-dev/mean_average_precision.git). The truth and prediction label sets are not equal:

truth_label_set
{0, 1, 2, 4, 5, 6, 8, 9, 10, 11}

pred_label_set
{0.0, 1.0, 2.0, 4.0, 5.0, 6.0, 8.0, 9.0, 11.0}

Moreover, there are additional labels (3, 7) that are not represented in the test set but are present in the training set.

Now, shall I set num_classes=10 (according to the length of truth_label_set) or num_classes=11 (according to the length of pred_label_set), or num_classes=12 (according to the length of all possible labels)?
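
For reference, here is how those candidates fall out of the label sets above. Whether the metric actually needs max(label) + 1 classes is an assumption based on the per-class loop visible in the traceback, where labels at or above num_classes would simply be skipped:

truth_label_set = {0, 1, 2, 4, 5, 6, 8, 9, 10, 11}
pred_label_set = {0.0, 1.0, 2.0, 4.0, 5.0, 6.0, 8.0, 9.0, 11.0}
train_only_labels = {3, 7}  # in the training set but not in the test set

all_labels = truth_label_set | {int(l) for l in pred_label_set} | train_only_labels

print(len(truth_label_set))  # 10
print(len(all_labels))       # 12
print(max(all_labels) + 1)   # 12, the value that covers every class index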

einareinarsson · Feb 18 '21

When will you upload the newest package to PyPI?

bonastreyair · Mar 17 '21

It is now live: https://pypi.org/project/mean-average-precision/2021.4.23.0/. Thanks!

bonastreyair · Apr 26 '21