Donny Chen
Hi @boxuLibrary, I think it's probably because the baseline between the input views is too wide. When training on RE10K, we assume there is **enough overlap** between the input source...
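If you want to verify this on your own data, here is a minimal sketch (assuming 4x4 world-to-camera extrinsics; the helper names are hypothetical, not from our codebase) for measuring the baseline between two source views:

```python
import torch

def camera_center(w2c: torch.Tensor) -> torch.Tensor:
    """Recover the camera center in world coordinates from a 4x4
    world-to-camera extrinsic: c = -R^T @ t."""
    R, t = w2c[:3, :3], w2c[:3, 3]
    return -R.transpose(0, 1) @ t

def baseline(w2c_a: torch.Tensor, w2c_b: torch.Tensor) -> float:
    """Euclidean distance between the two camera centers."""
    return torch.linalg.norm(camera_center(w2c_a) - camera_center(w2c_b)).item()
```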
Hi @abhigoku10, we appreciate your interest in our work. We have not experimented with similar datasets before. As stated in our paper, Sem2NeRF is mainly designed to consider "taking as input...
Hi @nanasylum, thanks for your appreciation. May I confirm whether you can get the correct rendered outputs when testing with the released pre-trained weights? If you still get black images...
Hi @dodododddo, thanks for your appreciation. Our multi-view Transformer is adapted from [UniMatch](https://github.com/autonomousvision/unimatch). To extend from two-view to multi-view, we set the attention `K`, `V` as the `N-1` other views rather than...
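Roughly speaking (a minimal sketch in plain PyTorch, not the exact UniMatch code; the tensor names here are hypothetical), the idea looks like this:

```python
import torch
import torch.nn.functional as F

def multiview_cross_attention(feats: torch.Tensor) -> torch.Tensor:
    """Sketch of extending two-view cross-attention to N views (N >= 2).

    feats: [N, L, C] -- per-view token features (N views, L tokens, C channels).
    For each view i, the query is its own tokens, while the keys/values are
    the concatenated tokens of the remaining N-1 views.
    """
    n, l, c = feats.shape
    out = torch.empty_like(feats)
    for i in range(n):
        q = feats[i]                                                 # [L, C]
        kv = torch.cat([feats[j] for j in range(n) if j != i])       # [(N-1)*L, C]
        attn = F.softmax(q @ kv.transpose(0, 1) / c ** 0.5, dim=-1)  # [L, (N-1)*L]
        out[i] = attn @ kv                                           # [L, C]
    return out
```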
Hi, @trungphien, sorry for the late reply. They should be mathematically equivalent. The main difference is that the formula is written in homogeneous coordinates, while the implementation is in Cartesian coordinates....
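To make the equivalence concrete, here is a tiny self-contained check (a generic rigid transform, not code taken from the repo):

```python
import torch

R = torch.eye(3)                    # rotation
t = torch.tensor([1.0, 2.0, 3.0])   # translation
p = torch.tensor([0.5, -0.5, 2.0])  # a 3D point

# Homogeneous form: lift p to 4D, apply the 4x4 matrix, then dehomogenize.
T = torch.eye(4)
T[:3, :3], T[:3, 3] = R, t
p_h = T @ torch.cat([p, torch.ones(1)])
p_homogeneous = p_h[:3] / p_h[3]

# Cartesian form: apply R and t directly.
p_cartesian = R @ p + t

assert torch.allclose(p_homogeneous, p_cartesian)  # same result
```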
Hi @Caesar-T, thanks for your interest in this project. I am not the author of the paper, but I also discovered years ago that the mask combination implementation in...
Hi @trungphien, thanks for your interest. Regarding deploying MVSplat on lightweight machines, kindly refer to related suggestions at https://github.com/donydchen/mvsplat/issues/78#issuecomment-2492591632.
Hi @Warrior456, thanks for your interest in our work. I just ran a test using the newest released code on my machine, and the scores precisely matched those in our paper...
Hi @Warrior456, glad to know that my previous suggestion helped.

* I feel MVSNeRF (Anpei Chen et al., ICCV21) might struggle with wide-baseline data since it relies on one single...
Hi @kevinhuangxf, thanks for your appreciation. Normally, the current setting should automatically utilize all available GPUs for training. I'm not sure what might be causing this issue. You could try...
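For example, assuming the codebase runs on PyTorch Lightning (a sketch only, not the exact project config), you could first confirm that all GPUs are visible and then request them explicitly:

```python
import torch
import pytorch_lightning as pl

# Verify that all GPUs are actually visible to PyTorch; if not, check the
# CUDA_VISIBLE_DEVICES environment variable before launching training.
print(torch.cuda.device_count())

# Request every visible GPU explicitly; DDP is the usual multi-GPU strategy.
trainer = pl.Trainer(accelerator="gpu", devices=-1, strategy="ddp")
```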