pytorch-dense-correspondence
Unit test to measure the accuracy of correspondence generation
Hello,
I have implemented this work both with Docker and without it (to debug and to better understand it).
My question is as follows:
From what I understand, the quantitative evaluation treats the generated correspondences as ground-truth labels. In the YouTube video of the talk about this paper, it was mentioned that some manual labeling was done for the cross-instance and cross-configuration object categories. For the single-object-within-scene category, where correspondences are generated at runtime, do you have a unit test that verifies the accuracy of the generated correspondences?
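To make the question concrete, here is a minimal sketch of the kind of check I have in mind, assuming the correspondences come from depth-based reprojection between two camera views. This does not use the repo's actual API; the functions (`project`, `reproject_pixel`), the intrinsics, and the relative pose are all synthetic illustrations:

```python
import numpy as np

def project(K, p_cam):
    # project a 3D point in the camera frame to a pixel (u, v)
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def reproject_pixel(uv, depth, K, T_b_a):
    # lift a pixel in image A to 3D using its depth, transform it into
    # camera B's frame, and project it to a pixel in image B
    uv_h = np.array([uv[0], uv[1], 1.0])
    p_a = depth * (np.linalg.inv(K) @ uv_h)   # 3D point in A's frame
    p_b = T_b_a[:3, :3] @ p_a + T_b_a[:3, 3]  # rigid transform A -> B
    return project(K, p_b), p_b[2]

# synthetic setup: pinhole intrinsics and a known relative pose
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])
T_b_a = np.eye(4)
T_b_a[:3, 3] = [0.1, 0.0, 0.0]  # camera B is 10 cm to the side of A

# a ground-truth 3D point in camera A's frame, observed in both views
p_a = np.array([0.2, -0.1, 1.5])
uv_a = project(K, p_a)
uv_b_gt = project(K, T_b_a[:3, :3] @ p_a + T_b_a[:3, 3])

# the "generated" correspondence: reproject uv_a using its known depth
uv_b_pred, depth_b = reproject_pixel(uv_a, p_a[2], K, T_b_a)

err = np.linalg.norm(uv_b_pred - uv_b_gt)
print(f"reprojection error: {err:.6f} px")
```

With exact synthetic depth and pose the error should be numerically zero; on real scene data the test would instead assert that the error stays below a pixel threshold.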