content of matches.txt in AliceVision
Hello,
what is the format of the content of matches.txt that I get from feature matching with AliceVision on the command line? Example:
250654938 687988815
1
sift 749
19 23
1 2
40 56
104 122
12 18
9 13
42 97
13 36
74 87
329 280
22 46
75 137
62 105
88 126
102 138
455 471
94 112
126 127
117 149
105 220
112 187
469 516
161 196
The first line is the image pair ID, OK, and the matches between the images of the pair come after the line containing the "sift" keyword, but what is the exact sequence of the coordinates? Is the first line for the first image, the second line for the second image, the third for the first image, the fourth for the second image, and so on? Or are they in a different order?
This is my interpretation so far:
#viewid1 #viewid2
250654938 687988815
1 #first matching pair
sift 749 #number of detected matches
19 23 #referencing (index idx) feature 19 of viewid1 and feature 23 of viewid2
1 2 #referencing (index idx) feature 1 of viewid1 and feature 2 of viewid2
// Read from the text file
// I J
// nbDescType
// descType matchesCount
// idx idx
// ...
// descType matchesCount
// idx idx
// ...
src/aliceVision/matching/io.cpp
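If it helps, here is a minimal parsing sketch in Python that assumes exactly that layout (the helper name parse_matches and the returned dictionary structure are just illustrative, not part of AliceVision):

```python
# Minimal sketch of a matches.txt parser, assuming the layout quoted above:
# one line "viewId1 viewId2", one line with the number of descriptor types,
# then for each descriptor type a "descType matchesCount" line followed by
# matchesCount lines of "featIdx1 featIdx2".
def parse_matches(path):
    matches = {}  # (viewId1, viewId2) -> {descType: [(idx1, idx2), ...]}
    with open(path) as f:
        lines = [line.split() for line in f if line.strip()]
    i = 0
    while i < len(lines):
        view_a, view_b = lines[i]
        n_desc_types = int(lines[i + 1][0])
        i += 2
        per_type = {}
        for _ in range(n_desc_types):
            desc_type, count = lines[i][0], int(lines[i][1])
            pairs = [(int(a), int(b)) for a, b in lines[i + 1:i + 1 + count]]
            per_type[desc_type] = pairs
            i += 1 + count
        matches[(view_a, view_b)] = per_type
    return matches
```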
Thank you for your reply,
Therefore, how can we print (visualize) the matches between the features of the two images of each pair, given matches.txt as the only input?
Sorry, I am a newbie; I use the command-line version of AliceVision without a GUI.
You also need the extracted features. The easiest way to visualize your results is in the Meshroom UI.
Here are some more of my notes:
Feature Extraction (SIFT)
The .desc files are binary, the .feat files are ASCII.
view_id.sift.feat -> table with the extracted features
view_id.sift.desc -> descriptors
Reference:
http://www.vlfeat.org/overview/sift.html
https://dsp.stackexchange.com/questions/24346/difference-between-feature-detector-and-descriptor
The image origin (top-left corner) has coordinate (0,0). The lower-right corner is defined by the image dimensions; for a landscape image of 5000x2000 this is (5000,2000).
0-------------------------5000
. x
.
. x
.
. x
. x
2000
view_id.sift.feat Matrix (without column title):
x        y        scale    orientation
2711.52  1571.74  308.335  4.75616
(to plot this, make y negative (*-1))
Scale: square/circle size. Orientation: line from the origin, rotated by that angle (in radians).
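As a rough illustration, here is a small Python sketch that loads a .feat file and plots the keypoints under the conventions above (four columns x y scale orientation, y negated for a plain plot); the filename is just a placeholder:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch: plot the keypoints of one view from its .feat file.
# Assumes the four-column ASCII layout noted above: x y scale orientation.
feats = np.loadtxt("250654938.sift.feat")  # placeholder filename
x, y, scale, orientation = feats.T

# Plain scatter plot: negate y so the top-left image origin ends up at the
# top of the plot, as described in the note above. The marker size is only
# a rough proxy for the feature scale.
plt.scatter(x, -y, s=scale, facecolors="none", edgecolors="r")
plt.gca().set_aspect("equal")
plt.show()
```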
ImageMatching
Matches all images (tree)
197018718 907017304 1638077662
907017304 1638077662

This shows which images are matched against each other. Example:

W X Y Z
X Y Z
Y Z
W will be matched with X, Y, Z, then X with Y and Z and so on
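A small Python sketch of reading such a list, under the assumption that each line is a reference view ID followed by the view IDs it is matched against (as in the W X Y Z example above); the function name is illustrative:

```python
# Sketch: expand the image-matching list into explicit view pairs, assuming
# each line is "referenceViewId candidateViewId candidateViewId ...".
def read_image_pairs(path):
    pairs = []
    with open(path) as f:
        for line in f:
            ids = line.split()
            if len(ids) < 2:
                continue
            ref, candidates = ids[0], ids[1:]
            pairs.extend((ref, c) for c in candidates)
    return pairs

# Example: a line "W X Y Z" expands to (W, X), (W, Y), (W, Z).
```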
Thank you again,
So if I want to visualize the keypoints in both images and connect the matched ones with lines, should I find the correspondence between the feature IDs from matches.txt and the coordinates from viewID1.sift.feat and viewID2.sift.feat respectively, multiply y by -1, and scale it with the scale factor? Is the feature idx the line number in viewID.sift.feat?
What is the option in the Meshroom UI to visualize matches on the pictures?
> So if I want to visualize keypoints in both images and connect the matched ones with lines I should find the correspondence between feature IDs from matches.txt and coordinates from viewID1.sift.feat and viewID2.sift.feat respectively, multiply y by -1

Yes.

> and scale it with the scale factor?

No, the scale is the scale property of the feature, and orientation is the orientation property. The position of the feature is represented by x y alone. You can find some background info on scale and orientation here: https://www.vlfeat.org/overview/sift.html

> The feature idx is the line number in viewID.sift.feat?

From what I can tell, yes. (I am not the expert on this, I feel more at home with Meshroom ;) )
To familiarize yourself with the inner workings, I'd recommend taking a look at Meshroom.
What is your goal? To code your own viewer?
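If the goal is a small standalone viewer, a rough Python sketch could look like this. It assumes the parse_matches() helper sketched earlier, matplotlib, three-channel images, and placeholder file names and view IDs:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of a minimal match viewer: put the two images side by side and draw
# a line for every match. File names and view IDs below are placeholders.
img_a = plt.imread("imageA.jpg")
img_b = plt.imread("imageB.jpg")
feat_a = np.loadtxt("250654938.sift.feat")
feat_b = np.loadtxt("687988815.sift.feat")
pairs = parse_matches("matches.txt")[("250654938", "687988815")]["sift"]

# Build the mosaic; the second image is shifted right by the first one's width.
height = max(img_a.shape[0], img_b.shape[0])
width = img_a.shape[1] + img_b.shape[1]
mosaic = np.zeros((height, width, 3), dtype=img_a.dtype)
mosaic[:img_a.shape[0], :img_a.shape[1]] = img_a[..., :3]
mosaic[:img_b.shape[0], img_a.shape[1]:] = img_b[..., :3]

plt.imshow(mosaic)  # imshow puts the origin at the top-left, so y is used as-is
for idx_a, idx_b in pairs:
    xa, ya = feat_a[idx_a, 0], feat_a[idx_a, 1]
    xb, yb = feat_b[idx_b, 0], feat_b[idx_b, 1]
    # x of the second image is offset by the first image's width.
    plt.plot([xa, xb + img_a.shape[1]], [ya, yb], linewidth=0.5)
plt.show()
```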
Thank you for your answer,
Yes, I am trying to code a viewer. When I plot the matches as (x, y) in the image mosaic (composed of the two images of the pair), I get lines extending outside the borders of the two images. When I tried plotting (y, x) instead, all keypoints fall inside the mosaic, so I am wondering whether the correct order of the coordinates is y first, x second? I assumed that because in viewID.sift.feat I see values of the x coordinate exceeding the image dimensions in px, while they seem reasonable for y, if I am not mistaken.
Yes, that is possible. I just wrote it down from memory :)
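If in doubt, a quick bounds test can confirm which column is which. A small sketch, assuming numpy and placeholder filename and image size:

```python
import numpy as np

# Sketch: sanity-check the column order of a .feat file. Count how many points
# fall inside the image bounds under each interpretation; the one with (near)
# full coverage is the plausible order.
feats = np.loadtxt("250654938.sift.feat")  # placeholder filename
width, height = 5000, 2000                 # image dimensions in pixels

def inside(xs, ys):
    return int(np.sum((xs >= 0) & (xs < width) & (ys >= 0) & (ys < height)))

as_xy = inside(feats[:, 0], feats[:, 1])  # column 0 = x, column 1 = y
as_yx = inside(feats[:, 1], feats[:, 0])  # column 0 = y, column 1 = x
print(f"x,y order: {as_xy}/{len(feats)} in bounds; "
      f"y,x order: {as_yx}/{len(feats)} in bounds")
```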