Thank you for your work. I would like to ask how to use a custom dataset: can I input a point cloud / 3D model separately?
Hi,
These are the cases supported by our method. The code uses RGB-D data for scans such as ScanNet, which already provide it; our code does not support rendering camera paths itself for use with our method.
Case 1: Point cloud or mesh with an RGB-D sequence: supported; you need the RGB-D sequence corresponding to the point cloud to use our method.
Case 2: Mesh without an RGB-D sequence: here you need to render a camera path over the mesh scene (RGB with a renderer, depth with a rasterizer) and use those frames with our method, though results might drop in this case. You can use either PyTorch3D or Blender for the rendering, since both let you control the lighting, which is important for reducing the distribution shift that could affect the 2D object detector; see the sketch after this list.
Case 3: Point cloud without RGB-D: this case is not supported by our method.
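For Case 2, a minimal PyTorch3D sketch of what that rendering loop could look like. This is not part of the OpenYOLO3D codebase: the mesh path, orbit trajectory, image size, and light position are placeholders you would adapt to your scene, and depth is read from the rasterizer's `zbuf` (background pixels are -1).

```python
# Hedged sketch: render RGB (renderer) + depth (rasterizer) frames from a mesh
# with PyTorch3D. File paths, camera path, and lighting are placeholders.
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, PointLights, RasterizationSettings,
    MeshRasterizer, MeshRenderer, SoftPhongShader, look_at_view_transform,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
mesh = load_objs_as_meshes(["scene.obj"], device=device)  # hypothetical mesh file

raster_settings = RasterizationSettings(image_size=512, blur_radius=0.0,
                                        faces_per_pixel=1)
# Controllable lighting helps reduce distribution shift for the 2D detector.
lights = PointLights(device=device, location=[[0.0, 2.0, 2.0]])

for azim in range(0, 360, 30):  # simple orbit path; replace with your trajectory
    R, T = look_at_view_transform(dist=3.0, elev=15.0, azim=azim)
    cameras = FoVPerspectiveCameras(device=device, R=R, T=T)
    rasterizer = MeshRasterizer(cameras=cameras, raster_settings=raster_settings)
    renderer = MeshRenderer(
        rasterizer=rasterizer,
        shader=SoftPhongShader(device=device, cameras=cameras, lights=lights),
    )
    rgb = renderer(mesh)[0, ..., :3]          # (H, W, 3) rendered RGB
    depth = rasterizer(mesh).zbuf[0, ..., 0]  # (H, W) depth; -1 = background
    # save rgb, depth, and the camera pose in the layout OpenYOLO3D expects
```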
Best
I want to evaluate OpenYOLO3D on the CMU Panoptic Dataset, which has both RGB-D and point clouds. How can I use your codebase to generate open-vocabulary 3D segmentation?
@Rajrup, you would have to generate the ground-truth instance mask files in this case, similar to the ones produced by Mask3D pre-processing (named instance_gt/validation), which the evaluation script requires. For the remaining data (RGB-D and ply files), you should structure them to match the OpenYOLO3D input format. A sketch of the mask file format follows below.
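As an illustration, Mask3D follows the ScanNet evaluation convention where each scene gets a text file with one integer per point, encoded as `semantic_label * 1000 + instance_id`. Assuming you already have per-point semantic and instance labels for a Panoptic scene, writing such a file could look like the sketch below; the file names and label arrays are hypothetical placeholders.

```python
# Hedged sketch: write a ScanNet/Mask3D-style instance_gt file, assuming
# per-point semantic labels and instance ids are already available.
import numpy as np

sem_labels = np.load("panoptic_scene_sem.npy")   # (N,) semantic id per point (placeholder)
inst_ids   = np.load("panoptic_scene_inst.npy")  # (N,) instance id per point (placeholder)

# ScanNet-style encoding: semantic_label * 1000 + instance_id, one line per point
gt = sem_labels.astype(np.int64) * 1000 + inst_ids.astype(np.int64)
np.savetxt("instance_gt/validation/panoptic_scene.txt", gt, fmt="%d")
```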