ros_kinfu
Slow Extraction
Hi RMonica,
during my thesis I found your ros_kinfu implementation. Great work! I have an Nvidia GeForce 840M in my laptop. I have written code for message and action interaction with ros_kinfu, and I tried to extract point clouds, images and meshes (meshes are more important for me). When extracting a cloud or a mesh, an error says "out of memory". Another error says (when extracting a mesh): "Error: OS call failed or operation not supported on this OS". Furthermore, when extracting the TSDF or images, they are published at a very low frequency on the specified result topic.
Do you know what the problems could be? Do I have a wrong understanding of how to set the right parameters, or is my graphics card too weak?
Best regards
Hi Thomananas.
The memory usage of KinFu during Marching Cubes (the algorithm that produces the mesh) is expected to be much higher than during normal operation.
I'm now noticing that in practice it is about thrice as much.
I have never tried it, but others have worked around the problem by reducing the TSDF volume resolution (the VOLUME_X, VOLUME_Y and VOLUME_Z constants in device.h).
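For a rough idea of the trade-off, here is a sketch of that change (check the actual declaration in your copy of device.h, it may look different; I am assuming the usual default of 512 voxels per side). Since the volume is a dense 3D grid, halving each dimension divides the TSDF volume memory by eight:

    // In device.h (exact form may differ in your copy):
    // halving each dimension (512 -> 256) cuts TSDF volume memory to 1/8,
    // at the cost of reconstruction detail.
    // If I remember correctly, the PCL sources suggest keeping these multiples of 32.
    enum
    {
      VOLUME_X = 256,  // default 512
      VOLUME_Y = 256,  // default 512
      VOLUME_Z = 256   // default 512
    };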
It also seems that Ubuntu wastes more video RAM than it used to. The command nvidia-smi may help you monitor video RAM usage.
Messages on the result topic are not published periodically. They are published once for each request you send.
Also, please see the KNOWN ISSUES section, added by my recent commit.
Hi RMonica,
thank you for your answer, it helped. I have also noticed the error "Error: TF_DENORMALIZED_QUATERNION: Ignoring transform for child_frame_id "kinfu_current_frame" from authority "unknown_publisher" because of an invalid quaternion in the transform", which appears from time to time. I do not understand why and when exactly this error occurs. Do you know what the problem could be and how it can be solved?
Best regards
Hi Thomananas.
Someone, in some library, implemented a very strict check, apparently.
Should be fixed in https://github.com/RMonica/ros_kinfu/commit/a48eb52bd8d47e2ecb2801af46c1c9829571dfea
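For anyone hitting this before updating: TF rejects transforms whose rotation quaternion is not exactly unit length, and the usual workaround is to normalize the quaternion before broadcasting. A minimal sketch of that idea (illustrative only; the function name and reference frame are made up, this is not the actual code of the commit):

    #include <tf/transform_broadcaster.h>
    #include <string>

    // Sketch: renormalize the rotation so TF's strict unit-quaternion check passes.
    void broadcastKinfuPose(tf::TransformBroadcaster & broadcaster,
                            tf::Transform transform,
                            const ros::Time & stamp,
                            const std::string & reference_frame)
    {
      tf::Quaternion q = transform.getRotation();
      q.normalize();               // remove accumulated floating-point error
      transform.setRotation(q);
      broadcaster.sendTransform(
        tf::StampedTransform(transform, stamp, reference_frame, "kinfu_current_frame"));
    }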
Hi RMonica,
I have a further question. As I read in your documentation, /kinfu_current_view is a synthetic depth map. What exactly is a synthetic depth map? I also noticed that it is encoded as rgb8. Is any depth data (in mm or m) stored in it, or is there a way to calculate it?
Best regards
Hi Thomananas.
The synthetic depth map is a depth image obtained by observing the 3D model from the current pose of the sensor.
The map published to /kinfu_current_view is illuminated: it is transformed into a color image by simulating a virtual light source.
Unfortunately, this means that you can't obtain any useful information from it: it's just nice to look at.
The image is produced by getImage:
https://github.com/RMonica/ros_kinfu/blob/master/kinfu/src/kinfuLS.cpp#L199
https://github.com/RMonica/ros_kinfu/blob/master/kinfu/pcl_kinfu_large_scale/kinfu_large_scale/src/kinfu.cpp#L941
which uses the generateImage kernel for lighting:
https://github.com/RMonica/ros_kinfu/blob/master/kinfu/pcl_kinfu_large_scale/kinfu_large_scale/src/cuda/image_generator.cu#L107
Instead, I guess you could just download the raw data from vmaps_g_prev_[0] (positions) and nmaps_g_prev_[0] (normals).
Sort-of-pseudocode example:
int useless; // receives the number of elements per row (not needed here)
std::vector<float> vertices;
vmaps_g_prev_[0].download(vertices, useless);
// The map is stored as three stacked channels: all x values first,
// then all y values, then all z values. rows and cols are the image size.
for (int v = 0; v < rows; v++)
  for (int u = 0; u < cols; u++)
  {
    float x = vertices[v * cols + u + 0 * rows * cols];
    float y = vertices[v * cols + u + 1 * rows * cols];
    float z = vertices[v * cols + u + 2 * rows * cols];
    // do something with x, y, z
    // (pixels where the raycast found no surface usually contain NaNs)
  }
Hi RMonica,
thanks for this fast answer. Is it better to add code to existing files like kinfuLS.cpp, or is it better to create a new file? When creating a new file, which files do I need to include to use, for example, this .download function? Is vmaps_g_prev_[0] meant to be of type std::vector<MapArr>, or how do I get access to this array?
Best Regards
Hi Thomananas.
The software architecture depends on what you are trying to do, but here is an idea.
Create another getter besides getImage in kinfu.cpp/kinfu.h.
The getter performs the download and returns the float vector(s).
Then, modify the ImagePublisher class in kinfuLS.cpp so that it calls that getter, converts the vector(s) into a point cloud, and publishes it to a ROS topic.
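This is roughly how the conversion step could look. It is only a sketch under my assumptions, not existing code: the function name is made up, 'vertices' is the float vector downloaded as in the earlier example, and I assume the channel-major layout (all x, then all y, then all z) with invalid points marked by NaN.

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl_conversions/pcl_conversions.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <ros/ros.h>
    #include <vector>
    #include <string>

    // Convert the downloaded vertex map into an organized ROS point cloud.
    // 'rows' and 'cols' are the depth image size; 'frame_id' should be the
    // frame the vertex map is expressed in.
    sensor_msgs::PointCloud2 verticesToCloudMsg(const std::vector<float> & vertices,
                                                int rows, int cols,
                                                const std::string & frame_id)
    {
      pcl::PointCloud<pcl::PointXYZ> cloud(cols, rows); // organized cloud
      cloud.is_dense = false; // it may contain NaN points

      for (int v = 0; v < rows; v++)
        for (int u = 0; u < cols; u++)
        {
          pcl::PointXYZ & pt = cloud(u, v);
          pt.x = vertices[v * cols + u + 0 * rows * cols];
          pt.y = vertices[v * cols + u + 1 * rows * cols];
          pt.z = vertices[v * cols + u + 2 * rows * cols];
        }

      sensor_msgs::PointCloud2 msg;
      pcl::toROSMsg(cloud, msg);
      msg.header.frame_id = frame_id;
      msg.header.stamp = ros::Time::now();
      return msg;
    }

The resulting message can then be published with an ordinary ros::Publisher from the ImagePublisher class (or wherever you end up putting it).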
Hi RMonica,
thank you for your idea. I have started doing it as you explained, and I think this is the right way to do it.
Best regards
Hi RMonica,
I have one more question. The vmap_g_prev is the vertex map of the previous frame, am I right? And is this really just raw data, or has some processing been done? In my case I want to transfer the data. Is it possible to get a depth map (fewer bytes), transfer it, and calculate a vertex map from it? Does this make sense, or is it better to publish the vmap_g_prev to a topic, as I do now?
Best regards
Hi Thomananas.
The vertex map is computer-generated: it is not "raw" like the data coming from the sensor.
To convert an organized point cloud into a depth image, you would usually take the z coordinate of each point. However, in this case the map may be in global coordinates, so (probably) you need to convert the points into sensor coordinates first (i.e. multiply by the inverse of the sensor pose). I would suggest visualizing it in RViz as a point cloud, so you can check whether it is the data you need. Optimize later.
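If it helps, here is a minimal sketch of that conversion for a single point, assuming the camera-to-global pose is available as an Eigen::Affine3f (the function name is just for illustration):

    #include <Eigen/Geometry>
    #include <cmath>
    #include <limits>

    // Convert one globally-referenced vertex to a depth value in sensor coordinates.
    // 'camera_pose' is assumed to be the camera-to-global transform tracked by KinFu.
    float vertexToDepth(const Eigen::Affine3f & camera_pose,
                        const Eigen::Vector3f & vertex_global)
    {
      // Bring the point back into sensor coordinates.
      const Eigen::Vector3f vertex_cam = camera_pose.inverse() * vertex_global;
      // The depth image value is simply the z coordinate (in meters here;
      // multiply by 1000 for a millimeter image).
      const float depth = vertex_cam.z();
      if (!std::isfinite(depth) || depth <= 0.0f)
        return std::numeric_limits<float>::quiet_NaN(); // no valid measurement
      return depth;
    }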
Hi RMonica,
thanks for all your answers.
Now I want to extract textures. When I set the parameter in parameter.h to true, the node starts, but nothing works. Do you know where the problem is?
Best regards
Hi Thomananas.
Well, that won't work. I have never implemented texture extraction. Sorry.
The original KinfuLS was composed of three executables, of which the third would integrate the texture (see http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php). I recall that, when I tried to run the third one, it just crashed. Since I did not actually need textures, I just gave up and integrated only the first two.
There is also the color volume class in color_volume.h/.cpp. Maybe kinfu somehow supports color integration. However, this is outside my expertise.