Zheng Chen

14 comments by Zheng Chen

I have the same question: the inverse-depth output is between 900 and 10000. Has this problem been resolved?

Now I also add a bias and a scale to the output, like this (though not normalized to [0, 1]): https://github.com/KU-CVLAB/DaRF/blob/47b2d1a23d13f0d149e55cf8fd2195ec42093d1e/plenoxels/models/dpt_depth.py#L87C18-L87C18
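For reference, a minimal sketch of what such an affine correction on the output can look like; the wrapper class, parameter names, and initial values below are my own illustration (assuming PyTorch), not the actual DaRF code:

```python
import torch
import torch.nn as nn

class ScaledDepthHead(nn.Module):
    """Wrap a monocular depth head with a learnable scale and bias.

    `base_head` is any module that outputs a raw inverse-depth map;
    names and defaults here are illustrative assumptions.
    """
    def __init__(self, base_head: nn.Module):
        super().__init__()
        self.base_head = base_head
        self.scale = nn.Parameter(torch.tensor(1.0))  # learnable scale
        self.bias = nn.Parameter(torch.tensor(0.0))   # learnable bias

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        inv_depth = self.base_head(x)  # raw inverse-depth prediction
        # affine correction of the raw output; note this does NOT map it to [0, 1]
        return self.scale * inv_depth + self.bias
```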

@isJHan See the supplementary material of [RichDreamer](https://arxiv.org/pdf/2311.16918v2.pdf), pages 13-14 (Sec. A.2). It describes the general normalization methods.
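For a quick illustration, one common choice is a plain per-image min-max normalization of the inverse depth (disparity) to [0, 1]; this is only a generic sketch assuming a PyTorch tensor input, so check the RichDreamer supplementary for the exact variants it discusses:

```python
import torch

def minmax_normalize_inv_depth(inv_depth: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Normalize an inverse-depth map of shape (..., H, W) to [0, 1] per image."""
    d_min = inv_depth.amin(dim=(-2, -1), keepdim=True)
    d_max = inv_depth.amax(dim=(-2, -1), keepdim=True)
    return (inv_depth - d_min) / (d_max - d_min + eps)
```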

I also ran into a similar problem. How can I solve it? @YESAndy Hi! Did you use the PointNav task dataset v0.1 with HM3Dv0.2? Does it work for you?

A single panoramic image cannot be used as the input to our method, because the 360-degree MVSNet needs at least two input images together with their poses.

If you want to estimate depth from a single image, a 360 monocular depth network is much better suited, such as UniFuse, BiFuse, or HRDFuse. [Hrdfuse: Monocular 360deg depth estimation by collaboratively learning holistic-with-regional...
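To make the multi-view input requirement concrete, here is an illustrative sanity check of the input contract; the function name and tensor shapes are hypothetical, not PanoGRF's actual API:

```python
import torch

def check_mvs_inputs(images: torch.Tensor, poses: torch.Tensor) -> None:
    """Sanity-check inputs for a 360-degree multi-view stereo network.

    Hypothetical shapes (not the real API):
      images: (B, V, 3, H, W) panoramas, with V >= 2 source views
      poses:  (B, V, 4, 4) camera-to-world matrices, one per view
    """
    if images.dim() != 5 or images.shape[1] < 2:
        raise ValueError("360-degree MVSNet needs at least two posed views; "
                         "use a monocular 360 depth network for a single panorama")
    if poses.shape[:2] != images.shape[:2]:
        raise ValueError("each input view must come with its camera pose")
```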

1. It doesn't contain the training data I used for PanoGRF. MP3D is preprocessed for my baselines (NeuRay, IBRNet). Replica is also used for testing in PanoGRF and my baselines (NeuRay and...

See the guidance in the first part of README.md. There is no separate script for preprocessing MP3D; the preprocessing code is fused into the dataloader file for 360 MVS depth...