ViewEvaluationNet

About Ranking Pair Generation!

Open HuaZheLei opened this issue 4 years ago • 3 comments

Hi, thanks for your great work. I want to know how you generate ranking pairs from your annotations. In the paper, you said:

We first generate ranking pairs of the same aspect ratio based on the averaged votes collected in Stage One, because the views of the same aspect ratios have been directly compared. Second, the views that are selected as best ones by more than 3 annotators in Stage Two are considered as the overall best views and will be paired with the remaining views that have been directly or indirectly compared with them. On average, we generate over 100 view pairs for each image, leading to more than 1 million comparative view pairs.

However, this is very unclear to me, and I cannot figure out how to generate the same ranking pairs as yours, which I think is pretty important for the performance. Could you provide the ranking pair list your paper uses? Thanks a lot!

HuaZheLei avatar Jul 11 '19 12:07 HuaZheLei

Hi, thanks for your interest. Unfortunately the ranking pairs were generated on the fly, just as described in the text. Here I will try to give an example. For simplicity, suppose there are only 2 aspect ratios (ARs), each with 6 candidate views: AR1 has x_0, x_1, ..., x_5 and AR2 has y_0, y_1, ..., y_5, with scores [0, 1, 0, 1, 0, 0] and [1, 1, 1, 0, 0, 0] respectively.

The first set of ranking pairs is built within each aspect ratio by pairing every positively scored view against every zero-scored view: [x_1, x_i] for every x_i with score 0, [x_3, x_i] for every x_i with score 0, [y_0, y_j] for every y_j with score 0, and so on.

In Stage 2 the annotators picked x_1 as the best view, y_0 as second best, and y_1 as third best. The second set of ranking pairs is then [x_1, all other views], [y_0, all other views except x_1], and so on.
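The two-stage scheme in the comment above can be written out as a small Python sketch. This is my own reading of the example, not the repository's code; the function names and data layout are assumptions.

```python
# Sketch of the two-stage ranking-pair generation described in the comment.
# Names and structure are my assumptions, not the authors' implementation.

def stage_one_pairs(labels, votes):
    """Within one aspect ratio, pair every view with >0 votes (winner,
    first element) against every view with 0 votes (loser)."""
    return [(labels[i], labels[j])
            for i, vi in enumerate(votes) if vi > 0
            for j, vj in enumerate(votes) if vj == 0]

def stage_two_pairs(ranked_best, all_labels):
    """Pair each overall best view (in rank order) against every view
    not already ranked above it."""
    pairs, above = [], set()
    for b in ranked_best:
        pairs += [(b, v) for v in all_labels if v != b and v not in above]
        above.add(b)
    return pairs

ar1 = [f"x_{i}" for i in range(6)]
ar2 = [f"y_{i}" for i in range(6)]

# Stage 1: within-AR pairs from the toy scores in the comment.
p1 = stage_one_pairs(ar1, [0, 1, 0, 1, 0, 0]) + stage_one_pairs(ar2, [1, 1, 1, 0, 0, 0])

# Stage 2: x_1 best, y_0 second, y_1 third; each paired against the rest.
p2 = stage_two_pairs(["x_1", "y_0", "y_1"], ar1 + ar2)
```

With this toy input, Stage 1 yields 8 + 9 = 17 pairs and Stage 2 yields 11 + 10 + 9 = 30 pairs; note that (y_0, x_1) is deliberately absent, matching "all others except x_1" above.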

zijunwei avatar Jul 11 '19 13:07 zijunwei

@zijunwei Thank you very much for your reply, but I still have some problems. Let me take a real annotation file as an example.

{"scores": [[3, 2, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 4, 0, 1, 1, 0], [2, 1, 0, 0, 0, 0, 3, 0, 0, 4, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0], [0, 0, 1, 0, 0, 1, 0, 1, 4, 2, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 3, 0, 0, 1], [0, 0, 4, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 3, 0, 0, 1], [2, 1, 0, 0, 0, 0, 4, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 3, 0, 0], [1, 0, 0, 0, 1, 0, 3, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 4, 0, 2, 0, 0]], "bboxes": [[96, 0, 576, 480], [48, 0, 432, 384], [192, 96, 480, 384], [240, 48, 624, 432], [1, 85, 385, 469], [148, 90, 532, 474], [48, 48, 560, 432], [192, 144, 576, 432], [125, 182, 509, 470], [110, 56, 494, 344], [49, 111, 433, 399], [233, 42, 617, 330], [192, 0, 552, 480], [96, 0, 456, 480], [192, 96, 407, 384], [15, 19, 360, 479], [290, 17, 635, 477], [59, 4, 404, 464], [48, 144, 560, 432], [0, 48, 640, 408], [144, 144, 528, 360], [48, 0, 560, 288], [243, 174, 627, 390], [86, 241, 470, 457]]}

In this annotation file, there are 6 arrays in "scores" and each array contains 24 scores. I guess each array is one annotator's scores for the 24 boxes. Then I compute the average scores and get [1.33, 0.66, 0.83, 0.16, 0.33, 0.66, 1.83, 0.5, 0.83, 1.5, 0.16, 0.16, 0.5, 1.0, 0.5, 0.33, 0.16, 0.33, 0.33, 1.66, 1.0, 1.16, 0.16, 0.33]. All the averages are above 0, so I cannot generate pairs like "[x_1, x_i] for every x_i with score 0". This part confused me the most.
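For reference, the averaging step described here can be reproduced like this, using the first four boxes of the annotation file above. This is just the arithmetic behind the numbers in the comment, not the authors' pairing code.

```python
# Per-annotator votes for the first four boxes, taken column-wise from the
# "scores" arrays in the annotation file above.
scores = [
    [3, 2, 0, 1],   # annotator 1
    [2, 1, 0, 0],   # annotator 2
    [0, 0, 1, 0],   # annotator 3
    [0, 0, 4, 0],   # annotator 4
    [2, 1, 0, 0],   # annotator 5
    [1, 0, 0, 0],   # annotator 6
]

# Average each box's votes over the 6 annotators.
avg = [sum(col) / len(scores) for col in zip(*scores)]
print([round(a, 2) for a in avg])  # → [1.33, 0.67, 0.83, 0.17]
```

These match the first four averages in the comment (which truncates to 0.66 and 0.16 rather than rounding).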

Also, in Stage 2, as I understand it, I should take the boxes with the top three scores as the best boxes. This part is pretty clear now.

HuaZheLei avatar Jul 12 '19 02:07 HuaZheLei

@zijunwei Do you think it would be possible for you to share the code snippet which generates the image pairs?

geraltFromRivia avatar Oct 29 '19 03:10 geraltFromRivia