About custom data
Hello, I'd like to know how to process my own dataset, which consists only of images. If I follow the format used in the "on the go" dataset, can I simply label all of the images as "clutter" and train on that, or do I need to manually separate the frames that contain dynamic scenes? I've tried labeling all the images as "clutter", but training directly on that does not separate the dynamic objects well. Could you please advise on the correct approach?
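For reference, this is roughly what I did to mark every frame as "clutter" (a minimal sketch; the file name labels.json and the label value "clutter" are my own assumptions about the expected format, not necessarily the exact on-the-go schema):

```python
# Sketch: label every frame in a custom image-only dataset as "clutter".
# NOTE: the output file name and label value are assumptions about the
# on-the-go-style format, not a confirmed schema.
import json
from pathlib import Path

image_dir = Path("my_dataset/images")

# Map each image file name to the "clutter" label.
labels = {img.name: "clutter" for img in sorted(image_dir.glob("*.jpg"))}

# Write the per-frame labels next to the image folder.
with open(image_dir.parent / "labels.json", "w") as f:
    json.dump(labels, f, indent=2)
```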
In our ablation study, we ran experiments with varying ratios of cluttered images. As the proportion of cluttered images increases, a slight performance drop is expected, but our method remains relatively robust compared to the baselines. The extent of the clutter also influences the results. Additionally, COLMAP initialization generally produces more accurate results than random initialization.
I do not have details about your dataset, such as the size of the cluttered regions or the total number of images. You may also consider adjusting parameters such as the number of initial distractor Gaussians (num_dynamic_points) or the ADC (adaptive density control) interval of the distractor Gaussians (refine_every_dyn), as the current settings are tuned for the default datasets, which typically contain 100–200 images.
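As a rough illustration of how those two settings might be adjusted for a larger dataset, here is a minimal sketch; the parameter names (num_dynamic_points, refine_every_dyn) come from this thread, while the container class, default values, and scaling choices are purely illustrative, not the repository's actual defaults:

```python
# Hedged sketch: overriding distractor-Gaussian settings for a larger
# custom dataset. Values below are placeholders, not tuned defaults.
from dataclasses import dataclass

@dataclass
class DistractorConfig:
    num_dynamic_points: int = 100_000  # initial number of distractor Gaussians
    refine_every_dyn: int = 100        # ADC interval (in steps) for distractor Gaussians

# For a dataset much larger than the default 100-200 images, one might
# increase the initial distractor budget and densify less frequently:
cfg = DistractorConfig(num_dynamic_points=200_000, refine_every_dyn=200)
print(cfg)
```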
I hope this helps!