tf-lift
feature matching?
How do I use LIFT to do feature matching?
OpenCV has a good toolbox for feature matching: https://docs.opencv.org/3.3.0/dc/dc3/tutorial_py_matcher.html
@TheToadAlly Hi, I tried the feature-matching toolbox you suggested, but I am having trouble with the data format conversion. I read the values from the h5 file like this:

```python
des1 = f1['descriptors'][()]
kp1 = np.float32(f1['keypoints'][:]).reshape(-1, 1, 2)
```

And when I ran the code, I got this error:

```
img3 = cv2.drawMatches(img1, kp1, img2, kp2, good, None, flags=2)
TypeError: Expected cv::KeyPoint for argument 'keypoints1'
```

Could you help me with this? Or could you send me your matching code? It has been bothering me for a really long time.
Thanks!
@ytongbai Hi, I am also facing this problem. Did you solve it?
@wisemaker Hi, I am facing the same problem as you. Did you solve it?
Hi guys, maybe it's late, but you can solve it by using the kp_list_2_opencv_kp_list function available in utils/kp_tools.py.
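For anyone landing here, a minimal sketch of that conversion in case you want to do it by hand. It assumes the first two columns of the keypoints array are x and y, and that the third column, if present, is the scale; check utils/kp_tools.py for the exact layout:

```python
import cv2
import h5py
import numpy as np

# Hypothetical file name, for illustration only.
f1 = h5py.File('output/1_desc.h5', 'r')
kp1 = np.asarray(f1['keypoints'])

# Wrap each row in a cv2.KeyPoint so drawMatches and friends accept it.
# Assumed column layout: [x, y, scale, ...]; adjust if your dump differs.
opencv_kp1 = [
    cv2.KeyPoint(float(k[0]), float(k[1]),
                 float(k[2]) if len(k) > 2 else 1.0)
    for k in kp1
]
```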
@hudsonmartins Hi, I have solved it in my own way. Anyway, thank you for your reply!
Hi, how did you solve it? Please help.
@abeermohamed1 Here is my code; I hope it helps you.
```python
import h5py
import cv2
import numpy as np

img1 = cv2.imread('./1_left.jpg')
img2 = cv2.imread('./1_right.jpg')

f1 = h5py.File('./out/image_left_desc.h5', 'r')
f2 = h5py.File('./out/image_right_desc.h5', 'r')

kp1 = np.array(f1['keypoints'].value)
kp2 = np.array(f2['keypoints'].value)
des1 = np.array(f1['descriptors'].value)
des2 = np.array(f2['descriptors'].value)

# matcher = cv2.BFMatcher_create(cv2.NORM_L2)
# matches = matcher.match(des1, des2)

# FLANN-based k-nearest-neighbour matching (k=2 for the ratio test below).
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# Lowe-style ratio test to keep distinctive matches.
good = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good.append(m)

# Stack the two images side by side and draw a line per match.
height = max(img1.shape[0], img2.shape[0])
width = img1.shape[1] + img2.shape[1]
output = np.zeros((height, width, 3), dtype=np.uint8)
output[0:img1.shape[0], 0:img1.shape[1]] = img1
output[0:img2.shape[0], img1.shape[1]:] = img2

for m in good:
    left = kp1[m.queryIdx][:2]
    right = kp2[m.trainIdx][:2] + (img1.shape[1], 0)
    cv2.line(output, tuple(map(int, left)), tuple(map(int, right)),
             (0, 255, 0), lineType=cv2.LINE_AA)

cv2.imwrite('./Lift.jpg', output)
cv2.imshow('out', output)
cv2.waitKey()
```
Just as a heads-up: it is harmful to do NN-ratio testing for LIFT. That test is only valid for SIFT.
@kmyid Hi, is this the right feature matching code?
```python
import h5py
import cv2
import numpy as np
from utils.kp_tools import kp_list_2_opencv_kp_list

img1 = cv2.imread('input/1.png')
img2 = cv2.imread('input/2.png')

f1 = h5py.File('output/1_desc.h5', 'r')
f2 = h5py.File('output/2_desc.h5', 'r')

kp1 = np.array(f1['keypoints'].value)
kp2 = np.array(f2['keypoints'].value)
opencv_kp1 = kp_list_2_opencv_kp_list(kp1)
opencv_kp2 = kp_list_2_opencv_kp_list(kp2)
des1 = np.array(f1['descriptors'].value)
des2 = np.array(f2['descriptors'].value)

bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
good = []
for m, n in matches:
    if m.distance < 0.4 * n.distance:
        good.append([m])

img5 = cv2.drawMatchesKnn(img1, opencv_kp1, img2, opencv_kp2, good, None, flags=2)
cv2.imwrite('output/test12.jpg', img5)
cv2.imshow('BFmatch', img5)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
No. You are still doing the nn-ratio test here.
@kmyid OK, thank you, and can you provide your matching code?
Thank you
thanks for sharing
Just remove these lines; don't do the ratio test:

```python
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good.append(m)
```
Remove:

```python
good = []
for m, n in matches:
    if m.distance < 0.4 * n.distance:
        good.append([m])
```
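With the ratio-test lines removed, `good` still needs to exist for the drawing calls that follow in the snippets above. A minimal sketch of a replacement that simply keeps every nearest neighbour (for drawMatchesKnn, wrap each match in a list instead):

```python
# No ratio test: keep each descriptor's nearest neighbour and let a
# later geometric check (e.g. RANSAC) reject the outliers.
good = [m for m, n in matches]
```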
@kmyid Ok, thank you very much!
Hi Professor, I have two questions:

1. Why is the NN-ratio test harmful to LIFT? BF kNN matching returns the k best matches, and the NN-ratio test just picks out the good ones. Why would it affect LIFT?
2. Is RANSAC still applicable to LIFT during matching?

Best regards, Weibo Qiu.
> Why is the NN-ratio test harmful to LIFT? BF kNN matching returns the k best matches, and the NN-ratio test just picks out the good ones. Why would it affect LIFT?

The NN ratio is bad because the distribution of descriptor distances for positive and negative pairs is not the same as SIFT's. To do this properly, you need to run a test on a dataset to figure out the distance distributions for positive/negative pairs, and then use the resulting threshold.
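As an illustration of that procedure, a rough sketch with placeholder data; `pos_dists`/`neg_dists` would come from descriptor pairs with known ground-truth labels on your own dataset:

```python
import numpy as np

# L2 distances for known matching / non-matching descriptor pairs,
# gathered offline from a labelled dataset (placeholder values here).
pos_dists = np.array([0.8, 1.1, 0.9])
neg_dists = np.array([1.6, 2.0, 1.8])

# Pick e.g. the threshold that keeps 95% of the positive pairs, and
# check how many negatives it would let through before settling on it.
thr = np.percentile(pos_dists, 95)
false_accept_rate = float((neg_dists < thr).mean())

# At match time, keep a nearest neighbour only if it beats the threshold.
good = [m for m, n in matches if m.distance < thr]
```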
> Is RANSAC still applicable to LIFT during matching?
Yes.
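For the RANSAC step, a hedged sketch using OpenCV's findHomography, reusing kp1, kp2, and good from the FLANN snippet above; it assumes at least four matches, a roughly planar scene, and that the first two keypoint columns are x and y:

```python
import numpy as np
import cv2

# Coordinates of the matched keypoints on each side.
src = np.float32([kp1[m.queryIdx][:2] for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx][:2] for m in good]).reshape(-1, 1, 2)

# RANSAC keeps only matches consistent with a single homography.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
```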
Thanks so much for your quick reply!
One more question:
Is TILDE fine with the NN-ratio test? Since TILDE is trained only for keypoint detection, I generated the descriptors using ORB.
Thanks in advance.
Best regards, Weibo
Using the NN-ratio test with ANY descriptor is not fine; you need to use a different ratio for each descriptor. That is why we did not do that test when comparing different methods.
Dear Professor,
In the matching stage, did you use brute-force Hamming-distance matching for LIFT? If you used Hamming distance, does that mean LIFT is a binary descriptor?
Thanks!
Best regards, Weibo.
We don't, because it's not a binary descriptor.
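A quick sanity check along those lines, using the des1 array from the snippets above; the exact dtype and dimensionality shown in the comment are assumptions:

```python
# LIFT descriptors come out as real-valued arrays (e.g. float32, (N, 128)),
# so Hamming distance does not apply; use an L2-based matcher instead.
print(des1.dtype, des1.shape)
```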
Then what matching method do you suggest I use, since I am evaluating LIFT?
For example, in the OpenCV library, cv2.BFMatcher() offers cv2.NORM_L2, which is appropriate for SIFT and SURF, and cv2.NORM_HAMMING, which is good for ORB, since ORB uses the Hamming distance as its measure.
@qiuweibo, I think you can use cv2.NORM_L2, because LIFT is similar to SIFT.
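For instance, a minimal brute-force setup along those lines; this is a sketch, not the authors' own evaluation code:

```python
# LIFT descriptors are real-valued, so L2 distance is the natural choice;
# NORM_HAMMING is only for binary descriptors such as ORB or BRIEF.
# crossCheck=True keeps only matches that agree in both directions.
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
```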
@Shu-HowTing Excuse me, which h5py version are you using? Mine is 2.10.0, and when I tried the feature-matching code I got a warning: `H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.` I wonder whether my h5py version is suitable for the feature-matching code, so I want to confirm the version. Thanks for replying.
That's just a warning saying that you should change the code to match the new way of reading data.
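Concretely, the migration mentioned in the warning is just:

```python
# Old style, deprecated and removed in h5py 3.x:
# des1 = np.array(f1['descriptors'].value)
# New style, works on current h5py versions:
des1 = np.array(f1['descriptors'][()])
```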