Leaflet.DistortableImage

Feature detection, description, and matching to assist stitching (complex)

Open jywarren opened this issue 7 years ago • 15 comments

This demo was pretty impressive:

https://inspirit.github.io/jsfeat/sample_orb.html


It'd be really neat to prototype a feature where, when dragging an image, any image that roughly overlaps it (say, where the distance between centers is less than the NW-SE corner span of the dragged image?) has this matcher run against it with a relatively high threshold... and a couple of lines are drawn on screen to indicate possible matches, to assist someone stitching.
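That candidate-selection heuristic could be sketched in plain JavaScript. The point objects and function names below are illustrative stand-ins, not the actual DistortableImage API:

```javascript
// Decide whether a candidate image is close enough to the dragged image
// to be worth running the feature matcher against. Points are plain
// {x, y} objects in layer coordinates (a simplification of the real API).
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// "Span" of the dragged image: the length of its NW-SE corner diagonal.
function diagonalSpan(nwCorner, seCorner) {
  return distance(nwCorner, seCorner);
}

// The heuristic from the issue: the distance between image centers must
// be smaller than the dragged image's diagonal span.
function isMatchCandidate(draggedCenter, draggedNW, draggedSE, otherCenter) {
  return distance(draggedCenter, otherCenter) < diagonalSpan(draggedNW, draggedSE);
}
```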

Docs: https://inspirit.github.io/jsfeat/#features2d

Overall library: https://github.com/inspirit/jsfeat

jywarren avatar Oct 21 '18 21:10 jywarren

I would guess green is a 'solid' match?

jywarren avatar Oct 21 '18 21:10 jywarren

@jywarren The demo site doesn't seem to work on my local (further investigation revealed that the server returns a 404 every time). Is there any way, or maybe another JS library for this purpose, that could provide a little more insight into this and that we could implement? I did consider tracking.js, but dropped that idea after learning that it's no longer maintained and its builds are failing.

I'm really eager to do this, and with the proper resources, this shouldn't take long! What do you think?

rexagod avatar Feb 19 '19 11:02 rexagod

Oh hmm, shouldn't it be running from the gh-pages branch of that repository? I had gotten it to work when I posted this; has any code changed?

jywarren avatar Feb 19 '19 16:02 jywarren

There's a missing resource I discovered on ngrok-ing it:

GET /[object%20MediaStream] 404 8.771 ms - 161
  • There seems to be an unhandled promise rejection (maybe someone in a bit of a rush forgot to catch the errors?)
  • The code looks intact since '17.
  • Upon serving the HTML over HTTPS, I got a "WebRTC is not defined" error.

@jywarren As of now, just to confirm, is this running solid on your local?

rexagod avatar Feb 19 '19 16:02 rexagod

Hmm. I'm not sure... it's possible I downloaded it and ran it locally back then. One clue could be the changes in WebRTC code that forced us to make changes in other repos -- like https://github.com/publiclab/infragram/issues/45 -- but it doesn't look like the same error, maybe?

I think that error with MediaStream indicates that it's trying to grab an image from the WebRTC media stream API but something is going wrong.

jywarren avatar Feb 28 '19 00:02 jywarren

There've been new commits since gh-pages was last published: https://github.com/inspirit/jsfeat/commits/master . Could those fix this?

jywarren avatar Feb 28 '19 00:02 jywarren

Aha, I think it is the same WebRTC error! See how we fixed it on this line: https://github.com/publiclab/infragram/pull/46/files#diff-6e7a2e9f1b12926b7586585e4a66c837R87

video.src needs to become video.srcObject!
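That fix is essentially a one-line change in the demo's camera setup. A hedged sketch of a feature-detecting version (the `attachStream` helper name is made up for illustration, not from the jsfeat source):

```javascript
// Attach a getUserMedia stream to a <video> element. Newer browsers
// dropped support for assigning a blob URL built from a MediaStream to
// video.src; video.srcObject is the replacement.
function attachStream(video, stream) {
  if ('srcObject' in video) {
    video.srcObject = stream;            // modern path
  } else {
    // legacy fallback for very old browsers
    video.src = URL.createObjectURL(stream);
  }
}

// Typical usage in a demo's setup code (sketch):
// navigator.mediaDevices.getUserMedia({ video: true })
//   .then(function (stream) { attachStream(videoElement, stream); })
//   .catch(function (err) { console.error('getUserMedia failed:', err); });
```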

jywarren avatar Feb 28 '19 00:02 jywarren

@jywarren We are making some progress! This took a lot more work than expected, and I still need to adjust some calibrations and thresholds (also, a lot of reading to do in order to build a solid understanding of the whole jsfeat codebase, since I didn't find the docs to be that detailed, to be honest), and maybe add some custom utility functions that suit our codebase best. Once all this is solid, I'll jump to integrating it here.

What do you think?

screenshot from 2019-03-06 01-51-04

(Green dots are matches.)

rexagod avatar Mar 05 '19 20:03 rexagod

This is amazing!!!! I think the initial implementation could just be drawing the lines between the two images. We can think about more complex displays later, and just get these basics right. Does that make sense? This is really fabulous!

jywarren avatar Mar 05 '19 20:03 jywarren

@jywarren Below is a list of features/modifications that were incorporated into this since last night.

  • Completely removed all the unnecessary components (video modules, unwanted utils, etc.).
  • Increased sharpness (reduced the Gaussian blur) on the inset image to get more accurate results while eliminating outliers (think of this as a balance between outlier rejection and coverage).
  • Calibrated thresholds on the basis of automated test runs (pull all images into an array, pass each pair to the module, then log the number of matches for different eigen/laplacian thresholds) over a few dozen images, and kept the best results. As of now, I've set those as the initial params for this module.
  • Converted the refined ORB codebase into an independent Public Lab module. It now takes two images, runs the algorithm against them, and logs the results!
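For context on what "runs the algorithm against them" involves: jsfeat's ORB descriptors are 32-byte binary strings, and its sample matches them by brute-force Hamming distance. The jsfeat-specific detection/description calls are omitted here; this is just the matching core sketched in plain JavaScript (the descriptor layout as `Uint8Array` rows is an assumption of this sketch, not the module's actual internals):

```javascript
// Hamming distance between two binary descriptors (Uint8Arrays of equal
// length, 32 bytes for ORB): count the differing bits.
function hammingDistance(a, b) {
  var dist = 0;
  for (var i = 0; i < a.length; i++) {
    var v = a[i] ^ b[i];
    while (v) { dist += v & 1; v >>= 1; }
  }
  return dist;
}

// Brute-force matcher: for each descriptor in setA, find the closest
// descriptor in setB. Returns {aIndex, bIndex, distance} records.
function matchDescriptors(setA, setB) {
  return setA.map(function (da, i) {
    var best = -1, bestDist = Infinity;
    setB.forEach(function (db, j) {
      var d = hammingDistance(da, db);
      if (d < bestDist) { bestDist = d; best = j; }
    });
    return { aIndex: i, bIndex: best, distance: bestDist };
  });
}
```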

I think the initial implementation could just be drawing the lines between the two images.

You can check out the beta (based on the above) here: https://orb-deploy-8j90kc7vv.now.sh/ !! :tada:

Matches above are indicated by the keypoints, i.e., the green dots, while the "good matches" are indicated by the green lines.
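The thread doesn't spell out how the "good matches" are selected; one common criterion (an assumption here, not necessarily what the beta uses) is a maximum Hamming distance combined with Lowe's ratio test against the second-best match:

```javascript
// Filter raw matches down to "good matches". Each match record carries
// the best and second-best Hamming distances found for a keypoint.
// Thresholds are illustrative, not tuned values from the beta.
function isGoodMatch(match, maxDistance, ratio) {
  if (maxDistance === undefined) maxDistance = 48; // out of 256 bits for ORB
  if (ratio === undefined) ratio = 0.8;            // Lowe's ratio test
  return match.bestDistance < maxDistance &&
         match.bestDistance < ratio * match.secondBestDistance;
}

function filterGoodMatches(matches, maxDistance, ratio) {
  return matches.filter(function (m) {
    return isGoodMatch(m, maxDistance, ratio);
  });
}
```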

Also, I'm looking forward to taking this (Microscope live stitching, auto-stitch in MapKnitter (magnetic attraction)) along with LDI (MapKnitter UI) as my GSoC project, since I find both of these very interesting topics to work on, and to start drafting a proposal that includes the two. Would that be okay?

Thanks!

rexagod avatar Mar 07 '19 00:03 rexagod

Wow, this is very impressive!

Yes, that would be a fine proposal. Thank you!

This is awesome, great work!


jywarren avatar Mar 07 '19 00:03 jywarren

Noting that this might make it simpler to decide which other DistortableImage instances to detect common points with, at least as a pattern: https://github.com/MazeMap/Leaflet.LayerGroup.Collision
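The collision-plugin idea boils down to a cheap axis-aligned bounding-box intersection test before running the (expensive) matcher. A plain-JS sketch with boxes as {minX, minY, maxX, maxY} objects; in a real Leaflet implementation this pre-filter would likely use L.LatLngBounds' intersects method instead:

```javascript
// Cheap pre-filter: only run feature matching between images whose
// bounding boxes overlap. Boxes are {minX, minY, maxX, maxY}.
function boxesIntersect(a, b) {
  return a.minX <= b.maxX && b.minX <= a.maxX &&
         a.minY <= b.maxY && b.minY <= a.maxY;
}

// Given the dragged image's box and the other images' boxes, return the
// indices of images worth passing to the ORB matcher.
function collisionCandidates(draggedBox, otherBoxes) {
  var out = [];
  otherBoxes.forEach(function (box, i) {
    if (boxesIntersect(draggedBox, box)) out.push(i);
  });
  return out;
}
```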

jywarren avatar Mar 13 '19 21:03 jywarren

@jywarren, things are really starting to align now!

I did a bit of research on the "headless web" trend that has been going on for a while now, and why it is considered an apt approach to testing and automation. It made me realize the extent to which we could improve our testing: gather snapshots, scale up cross-browser tests, and automate runs in different environments (simple Docker containers that would take a few hours at most to initialize) that log results (or generate heap dumps/snapshots) whenever there's a breaking change, to see if everything is working fine, and easily pipe that to plotsbot -- and this is all just off the top of my head! Such potential!

I'd like you to have a look over here whenever you have some time to spare. It's a really nice place to read about the headless web, or you could jump straight to the newer array of tools developed for the same purpose, each with their own pros and cons, which I'd be more than happy to discuss, along with which of them we should consider implementing in this module.

What do you think?

orb-cli

Above is a representation of the aforementioned "headless" approach, which I used to fetch an ORB stats object that updates every second: a CLI (non-GUI) way to gather the data that the UI methods can then render.

/cc @justinmanley @sashadev-sky

rexagod avatar May 01 '19 20:05 rexagod

Hi @rexagod, I'm sorry I haven't had time to check in here. Will do my best to provide some feedback today!

jywarren avatar May 14 '19 19:05 jywarren

Oh wait, OK. This is rad. I love the idea that it would have a CLI option, and I'll read that article too, thank you!!!

jywarren avatar May 14 '19 19:05 jywarren