TriVol

Question about Point-NeRF comparisons in the paper

Open youngju-na opened this issue 2 years ago • 2 comments

Hi, first of all, thanks for sharing this great work!

I have a question about Point-NeRF comparison results in the paper.

To my knowledge, Point-NeRF requires input images and builds initial neural point clouds with MVSNet that has not only 3D position but also image features (F dim) as an embedding (N, 3+F).

However, your method takes point clouds (N, 3). So my question is how did you build neural point clouds for Point-NeRF? Did you additionally use 2D images as an input?

Thanks in advance.

youngju-na avatar Jul 27 '23 08:07 youngju-na

@tau-yihouxiang

youngju-na avatar Aug 14 '23 07:08 youngju-na

Point-NeRF can be trained using only points (xyz), but the training and inference performance is much worse than with "xyz + F".

forestsen avatar Nov 13 '23 16:11 forestsen
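
For clarity, the difference between the two input formats discussed above can be sketched as follows. This is a minimal illustration with random data, not code from the TriVol or Point-NeRF repositories; `N` and `F` are arbitrary placeholder sizes:

```python
import numpy as np

N, F = 1024, 32  # number of points, per-point feature dimension

# Geometry-only point cloud, as TriVol takes as input: (N, 3) xyz positions.
points_xyz = np.random.rand(N, 3).astype(np.float32)

# Neural point cloud in the style Point-NeRF uses: xyz plus an F-dim feature
# per point (in Point-NeRF these come from image features, e.g. via MVSNet),
# giving shape (N, 3 + F).
features = np.random.rand(N, F).astype(np.float32)
neural_points = np.concatenate([points_xyz, features], axis=1)

print(points_xyz.shape)     # (1024, 3)
print(neural_points.shape)  # (1024, 35)
```

Per the reply above, Point-NeRF can be run with only the first format, but it performs notably worse without the image-derived features.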