purei9_unofficial
Add possibility to list maps, zones and to clean only a certain zone
- :white_check_mark: List maps
- :white_check_mark: List zones
- :clipboard: Add possibility to pass a zone to the clean command
Added a (simple) ability to list maps and zones for the v1 API in commit a8a26f7:
$ python3 -m purei9_unofficial cloud -v 1 -c ... maps -r 900395798357985798375972
+--------------------------------------+------+------------+
| id                                   | name | zones      |
+--------------------------------------+------+------------+
| 00000000-0000-0000-0000-000000000000 | map1 | ['zone1']  |
+--------------------------------------+------+------------+
Cleaning a specific zone has been possible on the v2 API since f459fc236f5bd9dead00737d12236fa48af525d4, but that API lacks a way to get map images.
Also implemented for v1 in f5088f79949bc4a612b614fc4c478672f942e54f
It would be awesome to be able to download map images for v2 API. Is there anything I can do to help?
Thanks for your hard work.
I checked again, but there are no images in the classic sense anymore (the old API had server-generated PNGs). It's some kind of JSON which needs to be plotted. I added some code if you really want to look at it: https://github.com/Phype/purei9_unofficial/blob/master/src/purei9_unofficial/cloudv2.py#L273
It's 8 KB of gzipped JSON which looks like this:
{ "chargerPose": { "t": 1000, "xya": [ 0, 0, 0.0001265096 ] }, "crumbs": [ { "t": 1000, "xy": [ -0.2516245, 0.32190162 ] }, { "t": 1000, "xy": [ -0.14708723, 0.32914242 ] }, { "t": 1000, "xy": [ -0.04283733, 0.3379979 ] }, { "t": 1000, "xy": [ 0.040577278, 0.34455898 ] }, { "t": 1000, "xy": [ 0.032779854, 0.4312258 ] }, [...]
Even if we could plot it, I'm unsure if it makes sense to put effort into this, given that they changed the app and API once again, so any work would have to be redone in a few months.
Interesting.
When time allows it, I'm going to play around a bit and see if I can't throw this into an SVG and hopefully generate something useful. Not going to spend a lot of time on it since, as you mention, it might be in vain if they change it. Still, can be a fun project for me. 🙂
I've had a chance to play around with plotting the map data and managed to get some reasonable results. Once I've made my code a bit more robust, I'll create a PR.
Findings so far:
- Currently the sequence number is hardcoded: https://github.com/Phype/purei9_unofficial/blob/665f4fdc5e827ae5665682c9adf03f8a0af1aec3/src/purei9_unofficial/cloudv2.py#L278. For me, the hardcoded value of '0' returned a specific cleaning run rather than my saved map. The correct sequence number is returned when the list of maps is requested (in the 'sequenceNumber' field of the JSON from getMaps()). This has been added to my v2.1 API code in #23.
- The crumbs are individual points on the map visited by the robot (i.e., breadcrumbs). The 't' field appears to refer to a transform ID, with the transform data given at the end of the file. The 'xy' field holds the coordinates of the point in the local coordinate system (i.e., untransformed). The transforms at the end of the file contain their ID ('t') and an 'xya' field, where x and y are translations and a is a rotation angle in radians. The formula below resulted in a map that looked correct (see the sketch below):

  (x_global, y_global) = R(-a_transform) · [(x_pt, y_pt) - (x_transform, y_transform)]

  where R(a) is the 2D rotation matrix for the angle a.
(edited to fix formatting, image not rendering and correct formula)
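A minimal sketch of that transform in Python, using only the standard library; the top-level "transforms" key name is an assumption, since the sample above is truncated before that part of the file:

```python
import math

def crumb_to_global(xy, transform):
    """Map a local crumb point to global coordinates.

    Implements (x_g, y_g) = R(-a) @ ((x_p, y_p) - (x_t, y_t)),
    where R(a) is the 2D rotation matrix for angle a.
    """
    tx, ty, a = transform["xya"]
    px, py = xy[0] - tx, xy[1] - ty
    c, s = math.cos(-a), math.sin(-a)
    return (c * px - s * py, s * px + c * py)

# Assumed layout: the transforms live under a top-level "transforms" key,
# each carrying the same "t" ID that the crumbs reference.
# transforms = {t["t"]: t for t in data["transforms"]}
# points = [crumb_to_global(c["xy"], transforms[c["t"]]) for c in data["crumbs"]]
```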
Here's a preview comparing the map drawn by the Electrolux One app with the map generated by the code I'm working on:
It's not ready for a pull request but you can preview the work here: https://github.com/louisjennings/purei9_unofficial/blob/871b17aaa54b62d3c2e266d347f4b077d90b7b62/src/purei9_unofficial/plot_map.py
I've tried a few map generation algorithms:
- Filling a volume around points: If the diameter of the points is set to the diameter of the robot, we get a fairly accurate map with very little effort. However, the edges are not particularly well defined and the map looks rather lumpy. This is what was previously posted (with the point size set by hand); a sketch of this approach follows the list.
- Alpha shapes: This is a fairly commonly used algorithm for generating polygons/meshes from point clouds and produces reasonable results if the parameter, alpha, is tuned. The images below show a few different settings for alpha (strictly speaking, I should be expanding the resulting polygons to account for the width of the robot). The downsides of this method are the need for tuning (though maybe there's a single value that would work for everyone) and no guarantee that the best value of alpha will produce a good result (I like alpha=6 the most, but it creates some issues with holes and separated polygons).
- Marching squares: I'm fairly certain that this is the algorithm used by the official Electrolux app (the 45 degree angles on holes and the grid-like shape give it away). I haven't implemented code for this yet but had a go at drawing the boundaries manually. It definitely looks a lot like the Electrolux images but isn't quite a perfect match - my squares are slightly too small and the shaded region around each point may be slightly too large. Once I've got working code for this, tuning those parameters should be a lot simpler. It looks like the official app does not use interpolation (hence the regular 45 degree angles) or attempt to resolve ambiguous cases.
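For the record, a minimal sketch of the first approach (filling a volume around points) using PIL; the scale and the robot diameter here are assumptions, not values from the API:

```python
from PIL import Image, ImageDraw

def render_crumbs(points, scale=50, robot_diameter=0.32, margin=1.0):
    """Rasterize global crumb points by stamping a robot-sized disc at each.

    points: iterable of (x, y) in metres; scale: pixels per metre;
    robot_diameter: disc size in metres (roughly the width of a PURE i9,
    an assumption here); margin: padding around the map in metres.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, min_y = min(xs) - margin, min(ys) - margin
    w = int((max(xs) + margin - min_x) * scale)
    h = int((max(ys) + margin - min_y) * scale)
    img = Image.new("L", (w, h), 0)
    draw = ImageDraw.Draw(img)
    r = robot_diameter * scale / 2
    for x, y in points:
        # Flip y so the image is not mirrored vertically.
        cx, cy = (x - min_x) * scale, h - (y - min_y) * scale
        draw.ellipse([cx - r, cy - r, cx + r, cy + r], fill=255)
    return img

# render_crumbs(points).save("map.png")  # points from the transform sketch above
```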
Great work! I'm excited about the results and the ability to implement this in Home Assistant. 🙂
@louisjennings great work! Assuming we can draw the map, the question, @Ekman, would be: what format should it ideally be in for Home Assistant to display?
Just some options:
- Rasterized PNG in a byte array
- Maybe SVG
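To illustrate the SVG option, a minimal sketch that writes the breadcrumb path as a single SVG polyline using only the standard library; the scale and stroke width are arbitrary assumptions:

```python
def crumbs_to_svg(points, scale=50, stroke=16):
    """Serialize global crumb points as an SVG polyline."""
    min_x = min(p[0] for p in points)
    max_y = max(p[1] for p in points)
    w = (max(p[0] for p in points) - min_x) * scale + stroke
    h = (max_y - min(p[1] for p in points)) * scale + stroke
    # Shift the origin and flip y so the path sits inside the viewport.
    pts = " ".join(
        f"{(x - min_x) * scale + stroke / 2:.1f},"
        f"{(max_y - y) * scale + stroke / 2:.1f}"
        for x, y in points
    )
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{w:.0f}" height="{h:.0f}">'
        f'<polyline points="{pts}" fill="none" stroke="black" '
        f'stroke-width="{stroke}" stroke-linecap="round" '
        'stroke-linejoin="round"/></svg>'
    )
```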
I've added some code using PIL (because I think having a dependency on matplotlib wouldn't be great) into the image branch, which comes out pretty good I think:
CloudMap.getImage() returns a byte array containing a PNG which can be displayed. For integration in Home Assistant, I think the vacuum integration has a MAP capability (see the Home Assistant docs), but I couldn't find out how to use it. Other people seem to create a Camera integration, but that feels more like a hack.
TODO: For me the code works, but it doesn't show the correct map. In the end we would probably want to display the map from the last run, not the stored "Map" which is used to define Cleaning and Avoid zones.
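A usage sketch under those assumptions; only CloudMap.getImage() comes from the image branch above, while the robot lookup and the .id attribute are hypothetical stand-ins for however you obtain CloudMap objects:

```python
# Hypothetical wiring: replace the first line with your actual cloud
# client / robot lookup; only getImage() is from the image branch.
robot = get_robot()  # placeholder, not a real function in this package
for cloud_map in robot.getMaps():
    png_bytes = cloud_map.getImage()  # PNG as a byte array
    with open(f"map_{cloud_map.id}.png", "wb") as f:  # .id attribute assumed
        f.write(png_bytes)
```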
I'll make it work as long as it's an image. I'm a huge fan of SVG since it's so lightweight, but any format works really. :slightly_smiling_face:
It was a while ago that I studied other vacuum integrations, but if I remember correctly, the camera is the way to go.