SegAnyGAussians
Is it feasible to use the actual mask as input for train_contrastive_feature.py?
I have manually labeled instance segmentation masks for my images. I see that this algorithm uses SAM to predict masks and then trains the 3DGS features separately. So I am wondering: is it possible to convert my real masks directly into the .pt format produced by extract_segment_everything_masks.py, obtain mask scales through get_scale.py, and then feed the real masks and mask scales into train_contrastive_feature.py for 3DGS feature training, in order to achieve better results?
Yeah, you just need to adjust the mask-scale extraction code to support your mask format. There is no fundamental difference between using SAM-extracted masks and your own annotated masks.
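For anyone attempting this, here is a minimal sketch of the conversion step. It assumes your annotations are integer label maps (one image per frame, background = 0) and that the pipeline expects a stack of per-instance binary masks as a `(N, H, W)` tensor saved with `torch.save` — check the output of extract_segment_everything_masks.py for the exact structure your version of the repo expects, since the saved format may differ (e.g. it may include scores or bounding boxes). The filename below is hypothetical.

```python
import numpy as np
import torch

def label_map_to_masks(label_map: np.ndarray) -> torch.Tensor:
    """Convert an integer instance-label map (H, W) into a stack of
    per-instance binary masks with shape (N, H, W).

    Assumes background pixels are labeled 0 and skips them.
    """
    instance_ids = np.unique(label_map)
    instance_ids = instance_ids[instance_ids != 0]  # drop background
    masks = np.stack([(label_map == i) for i in instance_ids], axis=0)
    return torch.from_numpy(masks)  # boolean tensor, one mask per instance

# Example: a tiny 4x4 label map containing two instances (ids 1 and 2).
label_map = np.array([
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [2, 2, 0, 0],
    [2, 2, 0, 0],
])
masks = label_map_to_masks(label_map)          # shape (2, 4, 4), dtype bool
torch.save(masks, "frame_0000_masks.pt")       # hypothetical output path
```

From there you would run the scale-extraction step on these tensors and pass the results to train_contrastive_feature.py, adjusting any loading code that assumes SAM-specific fields.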