KPConv
Feed network with 3D point clouds taken from a ZED Mini camera
Hi Thomas, thanks for your insane work and detailed structure. I am having an issue, though, that I would like your opinion on. I am currently using a ZED Mini camera to obtain 3D point clouds from 3D scans. I was wondering how to manipulate this kind of data so that it matches the input point cloud format the network needs. Do I need some kind of pose estimation feature for the point clouds obtained, or something similar?
Hi @Yakamoko,
Thanks for your interest in my work. If I understand correctly, you are using a stereo camera to get depth images. How you can use that data depends on what you want to do.
If your goal is to classify each depth image, you can just convert the depth image into a point cloud, which should be very easy with the camera model.
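For instance, the back-projection can be done in a few lines of numpy. This is only a minimal sketch with placeholder intrinsics (fx, fy, cx, cy); the real values come from the ZED calibration, and the ZED SDK may also give you the point cloud directly:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map in meters into an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # drop pixels with no depth reading
    z = depth[valid]
    x = (u[valid] - cx) * z / fx           # pinhole model: X = (u - cx) * Z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```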
If your goal is to classify a whole scene scanned with this camera, then you need a 3D SLAM algorithm to get the pose of each scan and create a point cloud of the whole scene. At that point, you will be able to feed this point cloud to a KPConv network in the same way as a scene from S3DIS for example.
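To give an idea of that merging step, here is a rough numpy sketch. It assumes the SLAM system gives you one 4x4 world-from-scan pose per scan; the grid subsampling mimics what the repo's compiled C++ wrapper does with first_subsampling_dl, so treat it as illustrative rather than the actual preprocessing code:

```python
import numpy as np

def merge_scans(scans, poses):
    """scans: list of (N_i, 3) arrays; poses: list of (4, 4) world-from-scan matrices."""
    clouds = [pts @ T[:3, :3].T + T[:3, 3] for pts, T in zip(scans, poses)]
    return np.vstack(clouds)

def grid_subsample(points, dl=0.04):
    """Keep one point (the cell barycenter) per voxel of side dl."""
    keys = np.floor(points / dl).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    sums = np.zeros((inv.max() + 1, 3))
    counts = np.zeros(inv.max() + 1)
    np.add.at(sums, inv, points)           # accumulate points per voxel
    np.add.at(counts, inv, 1)
    return sums / counts[:, None]          # average position per occupied voxel
```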
Hope this helps, Best, Hugues
Hi again Thomas, thank you for your fast response.
I am using the ZED stereo camera to scan a room, and I am getting some point clouds of the objects in the scene (the ZED can do that pretty easily).
My goal is to retrieve the objects from a dataset that have the minimum distance to the ones I scanned, so that I can generate them and then project them in a VR environment, as a form of real-time object generation (see the sketch below). I think your 3D SLAM suggestion might be the solution, although if something else comes to your mind, an opinion like yours will be more than welcome.
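To make the "minimum distance" idea a bit more concrete, this is roughly what I have in mind, assuming every object (scanned or from the dataset) has already been encoded into a fixed-length global descriptor, e.g. the features before a classification head; the descriptor extraction itself is an assumption and is not shown:

```python
import numpy as np
from scipy.spatial import cKDTree

def retrieve_closest(dataset_descriptors, scan_descriptors, k=1):
    """Indices of the k dataset objects whose descriptors are nearest to each scan."""
    tree = cKDTree(np.asarray(dataset_descriptors))
    _, idx = tree.query(np.asarray(scan_descriptors), k=k)
    return idx
```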
Have a nice day senpai, Greetings from Greece, Yakamoko