Binbin Huang

32 comments of Binbin Huang

Hi, I think it is probably because the number of ray samples scales with `N_voxel_final`, which results in high memory cost. You may try to reduce `N_voxel_final` or...
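
In case it helps, a minimal sketch of that change, assuming a TensoRF-style trainer that exposes `--N_voxel_final` as a command-line override (the script name, config path, and value below are illustrative assumptions, not from the original comment):

```bash
# Hypothetical override: shrink the final voxel grid so fewer samples are
# taken per ray, lowering peak memory. 200^3 = 8,000,000 voxels instead of
# a typical default of 300^3 = 27,000,000.
python train.py --config configs/your_scene.txt --N_voxel_final 8000000
```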

Have you tried BaiduYun Drive? The dataset can be accessed here: https://svip-lab.github.io/dataset/iPER_dataset.html

This works for me:

```bash
cmake . -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo -D CMAKE_CUDA_COMPILER='/usr/local/cuda/bin/nvcc'
```

You should process your images into the COLMAP data format, following https://github.com/graphdeco-inria/gaussian-splatting?tab=readme-ov-file#processing-your-own-scenes. Currently, multi-GPU training is not supported.
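
As a sketch of that workflow (paths are placeholders; see the linked README section for the full options):

```bash
# Place your raw images in <location>/input, then run the repo's converter,
# which calls COLMAP to estimate camera poses and undistort the images.
# COLMAP must be installed and on PATH (ImageMagick too if you resize).
python convert.py -s <location>
```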

This indicates that you are producing a mesh with 0 vertices, which can be related to the scaling of your data. Maybe you should adjust `depth_trunc`, or you can try `--unbounded`...
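
For reference, a hedged sketch of how those options are typically passed, assuming a 2DGS-style `render.py` mesh-extraction interface (verify the exact flag names against your checkout's README):

```bash
# Bounded scenes: raise depth_trunc if your scene scale is large, so depth
# values are not truncated away before TSDF fusion (an empty mesh often
# means every depth sample fell outside the truncation range).
python render.py -m <model_path> -s <dataset_path> --depth_trunc 10.0

# Large or unbounded scenes: let the script pick the fusion parameters.
python render.py -m <model_path> -s <dataset_path> --unbounded --mesh_res 1024
```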

Hi, have you tried the scripts I provided for unc and unc+? Or did you modify something else? Or have you tested the performance using my provided pretrained model to...

Hi, @zhangyr0114. I tested this repo and reproduced the results; I didn't encounter this situation. I guess there could be something wrong with loading the pre-trained BERT weights. Perhaps you...

Hi. What was the error message? A detailed description would help in locating the error.

Super thanks to @pani-vishal. It's worth a try. I haven't tested on Windows because I don't have a Windows machine at hand. I also searched for some related solutions in 3DGS's repo for...

You can try upgrading your pip and then running `pip install pyproject-toml`.
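
Spelled out, the suggestion amounts to these two standard pip commands:

```bash
# Upgrade pip itself, then install the missing package.
python -m pip install --upgrade pip
pip install pyproject-toml
```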