LaserWeb4
🦄 Optical Mark recognition
Yeah, we have the unicorns back! What about something like this: https://github.com/jcmellado/js-aruco to detect the work start while jogging? I mean, to precisely register material placement so we can cut, for example, previously printed cardboard or similar?
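For reference, the geometry js-aruco hands back is easy to work with. A minimal sketch of the corner math (the detector call itself, `new AR.Detector()` / `detector.detect(imageData)`, is js-aruco's API and needs a live camera frame, so only the pose derivation is shown; the clockwise-from-top-left corner ordering is how js-aruco reports markers):

```javascript
// Given the four corner points js-aruco reports for a detected marker
// (marker.corners, ordered clockwise from the top-left corner), derive
// the marker's centre and in-plane rotation in image coordinates.
function markerPose(corners) {
  // Centre = average of the four corners.
  const cx = corners.reduce((s, p) => s + p.x, 0) / 4;
  const cy = corners.reduce((s, p) => s + p.y, 0) / 4;
  // Rotation = angle of the top edge (corner 0 -> corner 1).
  const angle = Math.atan2(corners[1].y - corners[0].y,
                           corners[1].x - corners[0].x);
  return { x: cx, y: cy, angle: angle };
}

// Example: an axis-aligned 20 px marker whose top-left corner is at (100, 50).
const pose = markerPose([
  { x: 100, y: 50 }, { x: 120, y: 50 },
  { x: 120, y: 70 }, { x: 100, y: 70 },
]);
// pose.x === 110, pose.y === 60, pose.angle === 0
```

Converting that pixel pose to machine coordinates still needs a known camera mount and scale, of course.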
@openhardwarecoza Another way to implement a placement tracker could be with a reflectance sensor matrix https://es.aliexpress.com/store/product/10PCS-lot-QRE1113-new-in-stock/125218_32707304995.html --- maybe an i2c board with a PCF7485?
Personally, I'd go for a camera approach, since it doesn't involve begging anyone for firmware support for the IR (;. More under our control
FYI, as part of the Smoothie project, we have a sub-project in the works that is related to that feature (not named yet). It's essentially a program that runs on a Raspberry Pi (or Pi Zero) and offers a web server (AJAX API) that lets other programs (like LaserWeb, Visicut or Fabrica) take pictures (via one or more webcams plugged into the Pi) and have OpenCV operations run on them (stitching of mosaics, de-warping of fisheye pictures, corner/edge finding). The whole assembly would sit on the head of the fabrication machine. We are actually paying a consultant to work on this project and plan to release it both as a fully open-source project and as a ready-to-use product.
The final product would include:
- a raspi zero for the brains
- a 200x usb microscope for precise edge finding
- a fisheye camera to take quick global images of the work area to find what's on it
- a laser to help with edge finding and distance measurement ( height to plate )
- a time-of-flight distance sensor to help with fast work area height scanning
- one IR and one UV spectrographic sensor, which hopefully will allow us to detect spectra of material and auto-detect if something is wood or plastic for example
- an arm and a motor to auto-retract from the sensing position when not in use
Users would then be able to just install this on the head, tell their favorite host program what its IP is, and that host program would then be able to use it to:
- capture ( including a z-height map ) what's on the work area and show it to the user, allowing the user to align imported files with the actual workpiece. it's not just a network camera, it actually moves the head at several points on the work area and joins the pictures together to make a single work area picture
- precisely find the corner of things ( very important for cnc ) with very minimal user effort
- produce 3D scans of what's on the work area, including "simplification to basic solids" for better UI representation of the workpiece
- other stuff we need to test a bit more before talking about it publicly

The system would do most of the thinking (using OpenCV mostly) and would leave the host with as simple and easy an interface as possible, so the system would be pretty easy to integrate (just add a bunch of modals that pop up at the right moments, do a few AJAX calls and display a few pictures).
just thought I'd mention this as you guys seem to be thinking about similar problems, and maybe knowing this is in the works might have some importance
That's really good news. And it seems a full-blown contraption :)
http://recordit.co/JmShd46wkk ... not bad for JS
@jorgerobles recordit doesn't do it justice! Low FPS because of recordit. Trying it live with my own camera, I was stunned. That multimarker demo recognises over 40 targets in real time!
Yes! The Earth demo (estimating position) is slower, but I think it could work decently. I need to run a test with a shape and check the deviation against a manually aligned work. If that works well enough, it could be gold, at least with a diode cutter. A perfect setup could rotate all the artwork in LW to match a paper marker. :D
https://trackingjs.com (e.g. https://trackingjs.com/examples/color_camera.html) is also nice and fast. You gave me a new bug: computer vision was always out of reach, but now that it's JS it's within my skill level at last! Don't know what I want to do with it, but I know I want to use it for something
😈
I've done some tests of alignment with js-aruco and they are promising. My setup:
Actually the tests are very basic:
- Positioned the camera over the marker.
- Moved the material until an acceptable position was reached (x0, y0, pitch, roll, etc.)
- Ran several tests to calculate the camera offsets (a couple of them found -20.3 mm in my setup)
- Run!
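The camera-offset step above boils down to a subtraction. A hedged sketch (the positions are made up to match the -20.3 mm found in these tests; `cameraOffset` is an illustrative name, not LaserWeb code):

```javascript
// If the camera sees the marker centred while the controller reports the
// head at markerPos, and the tool tip actually sits at toolPos, the
// camera-to-tool offset is the difference between the two.
function cameraOffset(markerPos, toolPos) {
  return { x: toolPos.x - markerPos.x, y: toolPos.y - markerPos.y };
}

// Illustrative numbers only:
const off = cameraOffset({ x: 50, y: 40 }, { x: 29.7, y: 40 });
// off ≈ { x: -20.3, y: 0 }
```

Averaging the result over several positions, as done in the tests above, helps absorb camera noise.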
Caveats:
- Due to my magnificent $4 camera the marker estimation is not optimal, but it's enough!
- I have to shift the workspace via G92. If implemented, a better offset method could be used.
- The ad-hoc pen holder does not maintain pen squareness
But hey, I've got no Unicorn, but a stubborn Donkey!
Closer!
So, trying not to derail the UI even more, and to make this useful, I need some advice.
I've planned to add these settings in the camera settings folder:
- Use OMR (toggle): toggles image post-processing on/off on the camera feed (CPU intensive)
- marker size (mm)
- offset [x,y]: will affect the gcode generation offset.
- marker generator? pops up an SVG with a random marker. Optional.
Intended usage:
- set up your machine with a camera and a somewhat known offset
- search for zero manually, jogging until the estimator reads almost zero on all values (depends on setup precision, of course)
- set zero.
- the generated gcode will shift the coordinates by the OMR offset setting if active.
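As a sketch of that last step, assuming a G92-style shift (the helper name and 3-decimal formatting are illustrative, not actual LaserWeb settings or code):

```javascript
// Hypothetical helper: turn the configured camera-to-tool offset into a
// G92 shift of the form G92 X{-xoffset} Y{-yoffset}, emitted at job start.
function omrShiftCommand(offsetX, offsetY) {
  return `G92 X${(-offsetX).toFixed(3)} Y${(-offsetY).toFixed(3)}`;
}

// Illustrative values:
const cmd = omrShiftCommand(-20.3, 5); // "G92 X20.300 Y-5.000"
```

The shift would then need to be reset (e.g. at job end) so the next job starts from a clean state.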
What do you think?
offset [x,y]: will affect the gcode generation offset.
I'm uncomfortable with this. I thought mark recognition was to aid setting zero.
@tbfleming Would it be better to run G92 X{-xoffset} Y{-yoffset}? I mean, that would be better of course :smile:. Is that the way to proceed?
I suspect G92 may cause confusion. @cprezzi ?
Some sci-fi could include applying the detected transform to the document :smiley:
Hmmm. Do you plan on doing rotate? Unfortunately grbl can't do that using offsets. HAAS can, but that's a bit out of reach of our users...
Well, my approach would be to rotate the document in LW. Simplest, maybe. But it's not a must. First things first: it will suffice to do the zero offset.
Since you're going to do rotate, might as well handle it in cam. Maybe a transform2d argument passed into preflight.
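A transform2d like that could be as small as a rigid rotation-plus-translation closure applied to every document point before gcode generation. Names here are illustrative, not an actual LaserWeb API:

```javascript
// Build a rigid 2-D transform: rotate by angleRad about the origin,
// then translate by (dx, dy). Returns a function over [x, y] points.
function makeTransform2d(angleRad, dx, dy) {
  const c = Math.cos(angleRad), s = Math.sin(angleRad);
  return ([x, y]) => [c * x - s * y + dx, s * x + c * y + dy];
}

// Rotate 90 degrees and shift +10 in X:
const t = makeTransform2d(Math.PI / 2, 10, 0);
const p = t([1, 0]); // ≈ [10, 1]
```

Cam could map this over every path vertex, which keeps the firmware out of the rotation business entirely.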
1 marker might not be enough. e.g.:
- Assume only a 1 degree error detecting rotation (I suspect it will be worse)
- You're cutting a 100mm by 100mm rectangle
- Lower-left corner of rectangle is perfectly aligned
- Lower-right corner's y value will be 1.7mm off
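The 1.7 mm figure is just trigonometry: a 1 degree rotation error over a 100 mm edge displaces the far corner by 100 * sin(1°). Checking it:

```javascript
// Displacement of the far corner of a 100 mm edge under a 1 degree
// rotation error about the near corner.
const errMm = 100 * Math.sin(1 * Math.PI / 180);
// errMm ≈ 1.745 mm
```
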
Yes, I foresaw that issue. That's why in the first instance rotation will be skipped. A 2/3-point registration could be done, at least registered with the current position, like RepRap manual registration.
If the goal is stock rotation, why not use the built-in XYZ probing already in all firmwares, using that to calculate rotation?
Well, my primary objective is/was to have a decent registration point to make cardboard models :) I haven't made up my mind about other applications so far.
It seems G10 L2 should be used, http://linuxcnc.org/docs/2.6/html/gcode/gcode.html#sec:G10-L10 shouldn't it?
Yes. Refer to the existing set-zero implementation.
Anyway, that means it has to be set/reset on job start/end. So ...
offset [x,y]: will affect the gcode generation offset.
There is a big difference between G10 L2 and G92 (and not every firmware handles it the same way):
- G10 L2 P1 sets the offsets only for the G54 coordinate system to the given values.
- G92 sets the actual position to the given values and shifts the offsets of all coordinate systems accordingly.
- The frontend should not send these commands directly. Instead, the setZero command should be sent to the backend, which executes firmware-specific commands.
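Illustratively, the difference could be captured by two tiny builders a backend setZero handler might choose between. Command strings follow the descriptions above; real firmware handling differs in detail, and these names are hypothetical:

```javascript
// Writes the G54 work-offset table directly (only affects G54).
function g10SetG54(x, y) {
  return `G10 L2 P1 X${x} Y${y}`;
}

// Declares the current position to be (x, y), shifting the offsets of
// all coordinate systems accordingly.
function g92Shift(x, y) {
  return `G92 X${x} Y${y}`;
}

// The backend, knowing the connected firmware, picks one; the frontend
// only ever asks for "setZero".
```
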
@jorgerobles How about a button "find mark" on the jog tab, that searches for the mark, moves to the calculated zero position and sends setZero to the backend?
Yes. In order to affect the offset, since the camera is placed away from the tool, could you add some command to set it? :)
Overlapping answers! I think automatically searching for the mark is overly complicated. I vote for jogging to the marker and, once adjusted, clicking a "set marker" button. Also, as said before, the marker is not really zero. Well, so I imagined; maybe I'm wrong.