PySIFT
Array out of Bounds issue
When I feed different images into the SIFT class to extract SIFT features, for some images I keep getting this error:

```
weight = kernel[oy+w, ox+w] * m
IndexError: index 9 is out of bounds for axis 1 with size 9
```
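For readers hitting the same message: here is a minimal sketch of the failure mode, illustrative only and not the actual PySIFT code. It shows what happens when a window offset indexes one column past the end of a 9x9 kernel.

```python
import numpy as np

# A 9x9 kernel, as in the error message ("axis 1 with size 9").
w = 4  # half-width, so the kernel is 2*w + 1 = 9 pixels wide
kernel = np.ones((2 * w + 1, 2 * w + 1))

oy, ox = 0, 5  # ox + w = 9, one past the last valid column index (8)
try:
    weight = kernel[oy + w, ox + w]
except IndexError as e:
    print(e)  # index 9 is out of bounds for axis 1 with size 9
```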
I would truly appreciate it if you could fix this, as I cannot find the root of the problem. If you have a PayPal link, I would gladly donate for all the amazing work you did. This SIFT implementation is a very good one :)
The error occurs in the following file and line of code: File "PySIFT/orientation.py", line 65, in assign_orientation
Thank you for liking the repo. Unfortunately, I don't maintain this repo anymore so I cannot guarantee I'll have time to look into the issue. In case I do, what is the shape of the image you are passing?
Thank you for such a quick response. The image is 195 by 195 pixels. Do you know whether someone has actually improved upon the code by fixing such small bugs? A while back you recommended a GitHub repo at this link: https://github.com/HamidNE/machines-vision-sift-algorithm, but unfortunately it is no longer available.
And I honestly mean it: if you have a PayPal link where I can donate, I will gladly do so for the time you spend fixing the bugs.
Kind regards
Aaron
I appreciate that but I don't think I'd accept donations for this project.
I don't know of any forks that have actually made further commits, but I just tried on a 195x195 image and it seemed to work. I recently made a commit slightly changing the main.py script. Take a look at the updated readme and try running again. Let me know if you still get the error and we can work from there.
Thank you so much. I'll definitely keep you posted :)
Dear Sam,
1. I tried the code after you applied the changes in the main file, using a 195 by 195 image called imgPL.png, which I attached to this email. In this case I set the contrast threshold t_c = 0.011. The error I got is as follows:

```
File "main.py", line 20, in <module>
    _ = sift_detector.get_features()
File "/home/aaron/Downloads/PySiftnew/sift.py", line 30, in get_features
    kp_pyr[i] = assign_orientation(kp_pyr[i], DoG_octave)
File "/home/aaron/Downloads/PySiftnew/orientation.py", line 65, in assign_orientation
    weight = kernel[oy+w, ox+w] * m
IndexError: index 9 is out of bounds for axis 1 with size 9
```

When running the exact same code with t_c = 0.011 on a 500 by 500 image called L8_B10_Case1_Ref_gaussian_blur_std_4.png, which I also attached to this email, the code worked. I attached the output result as well. Note that for both images I tested, the only thing I changed is t_c, the contrast threshold.

2. With regards to this output, there are 4 images, where each image represents the detected SIFT keypoints for the last image in each octave. So the image furthest to the right (with the fewest detected SIFT features) in output_result.png represents the final result (where the keypoints found in kp_pyr[3] are plotted). Are my statements correct, or did I misunderstand the output?
I am a research support officer at the University of Malta, currently doing a Masters of Research on satellite image alignment. My plan is to use your code, as it is very faithful to the paper written by David Lowe, and adapt it to satellite images to eventually align them. I am asking the question in point 2 because I need to understand which keypoints I will eventually use to match and perform image registration.
If I opt to work with your code, as always, I will cite and reference it in my masters dissertation. If I eventually also publish a paper, the use of your code will be referenced and cited there as well.
Sorry for the inconvenience caused.
Kind regards
Aaron
I can't see the attached images, as this was sent to me as a GitHub issue, not an email, so I cannot test your specific images. Since running on a 195x195 image works on my end, I'm not sure what else I can do right now to troubleshoot. For now, I would suggest using a lower number of octaves or a lower s (number of images per octave) when running on smaller images. I suggest this because my guess is that on smaller images, the higher octaves are downsampled enough that the kernel is larger than the downsampled image.
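That guess can be put in rough numbers. The sketch below assumes each octave halves the image (standard for a SIFT Gaussian pyramid) and that the orientation window stays a fixed size in pixels at each octave's scale; it is illustrative arithmetic, not the repo's code.

```python
def octave_sizes(side, num_octaves):
    """Side length of a square image at each octave, halving every octave."""
    return [side // (2 ** i) for i in range(num_octaves)]

for side in (195, 500):
    print(side, '->', octave_sizes(side, 4))
# 195 -> [195, 97, 48, 24]
# 500 -> [500, 250, 125, 62]
```

By the fourth octave, a 195-pixel image has shrunk to roughly 24 pixels, so a 9-pixel-wide sampling window around a keypoint near the border can easily run past the edge; the 500-pixel image still has 62 pixels of room at the same depth.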
As for the output, your understanding is correct. In addition to displaying the keypoints, the keypoint pyramid and the feature pyramid (the feature vector for each detected keypoint) are saved to the `results/<output prefix>_kp_pyr.pkl` and `results/<output prefix>_feat_pyr.pkl` files.
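For illustration, the saved pyramids can be read back with the standard `pickle` module. The snippet below round-trips a dummy pyramid through a file following the naming pattern above; the array shapes and the `demo` prefix are made up for the example, not PySIFT's real output format.

```python
import os
import pickle
import numpy as np

prefix = 'demo'  # stands in for whatever output prefix main.py was given
os.makedirs('results', exist_ok=True)

# Dummy stand-in for a keypoint pyramid: one array per octave.
kp_pyr = [np.zeros((10 - 2 * i, 4)) for i in range(4)]
with open(f'results/{prefix}_kp_pyr.pkl', 'wb') as f:
    pickle.dump(kp_pyr, f)

with open(f'results/{prefix}_kp_pyr.pkl', 'rb') as f:
    loaded = pickle.load(f)

for i, kps in enumerate(loaded):
    print(f'octave {i}: {len(kps)} keypoints')
```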
As for using this code in your masters dissertation, I would highly advise you to familiarize yourself with the code and make changes to it instead of using it out-of-the-box. This is because, while the algorithm superficially seems to conform to the Lowe paper, there could easily be discrepancies that I have not realized. In addition, there is a known bug in scaling the keypoints from the later octaves (see the "Limitations" section of the README).
There is also an old implementation of image matching/alignment in the match.py script which could potentially be used as a starting point for any alignment research.
Thank you for your reply. I will definitely do so. Would it be possible to send me your email address? I am just curious to see whether that image works for you. If it's a problem, no worries.
And I'm definitely not planning to just use it out of the box. In fact, I have read Lowe's paper in quite some depth.
Kind regards
Aaron
Maybe upload them to Google Drive or something similar and post a shareable link? I'm not keen on posting my email address in public places.
With that 'failed' image I set the number of blur levels to 3 for each octave and a total of 3 octaves, and it worked. That does make sense: with too many blur levels, the final level may be downsampled to the point where you cannot find any features because the image is too small. Something that is still not clear to me is why you keep getting fewer features in the final result. For instance, in your example picture, the fourth octave has far fewer features than the first octave. My guess is that it comes down to the increase in blurring: the fourth octave is blurrier because it builds on the results of the previous octaves.
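Independently of tuning the octave and blur-level counts, a generic guard against this kind of IndexError is to clip the sampling window to the image bounds. The sketch below is illustrative only, with made-up names; it is not PySIFT's actual fix.

```python
def safe_window(img_shape, x, y, w):
    """Yield (oy, ox) window offsets whose sample point stays inside the image."""
    h, wd = img_shape
    for oy in range(-w, w + 1):
        for ox in range(-w, w + 1):
            if 0 <= y + oy < h and 0 <= x + ox < wd:
                yield oy, ox

# A keypoint in the corner of a 24x24 octave, with half-width w = 4:
offsets = list(safe_window((24, 24), 23, 23, 4))
print(len(offsets))  # 25 of the 81 window positions remain in bounds
```

Iterating only over `safe_window(...)` offsets (and indexing the kernel with `kernel[oy + w, ox + w]`) keeps every access inside both the image and the kernel, at the cost of a partially populated histogram near borders.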
orientation.py

```python
def assign_orientation(kps, octave, num_bins=36):
    .........#1
    w = int(2 * np.ceil(sigma))
    .........#2
    m, theta = get_grad(L, x, y)
    theta = 359 if (theta >= 360) else theta
```

descriptors.py

```python
def get_histogram_for_subregion(m, theta, num_bin, reference_angle, bin_width, subregion_w):
    .........#3
    for i, (mag, angle) in enumerate(zip(m, theta)):
        angle = (angle - reference_angle) % 360
        angle = 359 if (angle >= 360) else angle
        binno = quantize_orientation(angle, num_bin)
```

keypoints.py

```python
def localize_keypoint(D, x, y, s):
    .........#4
    # offset = -LA.inv(HD).dot(J)
    x_hat = np.linalg.lstsq(HD, J)[0]  # note: drops the minus sign the commented-out line had
    offset = x_hat
    return offset, J, HD[:2, :2], x, y, s
```
What is this?
Code changes to avoid errors
I got the same error as you. After changing the code as shown above, it runs normally.
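A plausible reading of why the `theta = 359 if theta >= 360 else theta` clamp helps: an angle of exactly 360 degrees would quantize to bin 36, one past the end of a 36-bin histogram. The `quantize_orientation` below is a guessed reimplementation for illustration, not the repo's actual function.

```python
def quantize_orientation(theta, num_bins):
    """Map an angle in degrees [0, 360) to a histogram bin index."""
    bin_width = 360 // num_bins  # 10 degrees per bin for 36 bins
    return int(theta) // bin_width

num_bins = 36
print(quantize_orientation(360.0, num_bins))  # 36: one past the last valid bin (0..35)
print(quantize_orientation(359.0, num_bins))  # 35: largest valid bin index
```

Clamping the angle to 359 before quantizing keeps the bin index in range, which would explain why the edit stops the out-of-bounds access in the histogram code paths.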