
Can we test this model on real-world data we captured ourselves?

Open Zing110 opened this issue 1 year ago • 3 comments

Hi, this is amazing work. I want to use this technique to segment real-world data I captured myself. How can I achieve that? Is there a tutorial for preprocessing our own data?

Zing110 avatar Feb 22 '24 09:02 Zing110

Hello, it is possible to segment your own data with SA3D.

You can process your data with COLMAP to estimate the camera parameters and store the processed data following the structure described in our README. Then follow the provided instructions to train and segment the NeRF.
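In case it helps, here is a rough sketch of the usual COLMAP step. The folder names and the final conversion step are assumptions; the authoritative layout is the one in the README.

```python
# Hedged sketch: running COLMAP's standard sparse-reconstruction pipeline
# on a folder of captured images. Paths are hypothetical; adapt them to
# the data layout described in the SA3D README.
import subprocess
from pathlib import Path

scene = Path("./data/my_scene")          # hypothetical scene folder
images = scene / "images"                # your captured photos
database = scene / "database.db"
sparse = scene / "sparse"
sparse.mkdir(parents=True, exist_ok=True)

# 1. Detect and describe keypoints in every image.
subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(database),
                "--image_path", str(images)], check=True)

# 2. Match features across all image pairs.
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", str(database)], check=True)

# 3. Run incremental SfM to recover camera intrinsics and extrinsics.
subprocess.run(["colmap", "mapper",
                "--database_path", str(database),
                "--image_path", str(images),
                "--output_path", str(sparse)], check=True)

# The resulting sparse model (cameras, images, points3D) then needs to be
# converted into the pose format the repo expects, e.g. poses_bounds.npy
# for LLFF-style data or transforms.json for Blender-style data.
```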

Jumpat avatar Feb 23 '24 02:02 Jumpat

Oh, thank you. Does this mean that if I use COLMAP to get the transform.json, that is enough to train on my data? Do I need to downscale the images to get images_x? If so, how do I generate them, and what rules should I follow? Also, do I need to write a scene-specific config file like fern.py for training to run correctly?

Zing110 avatar Feb 24 '24 07:02 Zing110

Yes, the images_x folders are not necessary unless your images are too large.
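If you do need downscaled copies, something like the sketch below should work. The folder naming (images_4 for a 4x downscale) follows the LLFF-style convention and is an assumption here, not something specific to this repo.

```python
# Hedged sketch: producing an images_4 folder (images downscaled by 4x).
# The factor and folder name are assumptions based on the LLFF convention;
# only do this if your source images are very large.
from pathlib import Path
from PIL import Image

factor = 4
src = Path("./data/my_scene/images")
dst = Path(f"./data/my_scene/images_{factor}")
dst.mkdir(parents=True, exist_ok=True)

for img_path in sorted(src.glob("*")):
    if img_path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
        continue
    img = Image.open(img_path)
    new_size = (img.width // factor, img.height // factor)
    img.resize(new_size, Image.LANCZOS).save(dst / img_path.name)
```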

You do need a config file to train on your own data. If your images are forward-facing, check the configs for the llff dataset; if they are 360-degree captures, check the configs in nerf_unbounded.
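As a rough illustration, a per-scene config in the style of fern.py might look like the sketch below. The base config path and field names are assumptions; the safest route is to copy an existing config from the llff or nerf_unbounded folder and edit only the scene-specific entries.

```python
# Hedged sketch of a per-scene config in the style of fern.py.
# Field names are assumed from DVGO-style configs and may differ in this repo;
# copy an existing config and adapt it rather than using this verbatim.
_base_ = './llff_default.py'        # assumed base; use whatever fern.py points to

expname = 'my_scene'
basedir = './logs/llff'

data = dict(
    datadir='./data/my_scene',      # folder produced by the COLMAP step
    dataset_type='llff',            # or the unbounded type for 360-degree scenes
    factor=4,                       # downscale factor if you created images_4
)
```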

Jumpat avatar Feb 24 '24 12:02 Jumpat