DSNeRF

DSNeRF for inward looking scenes?

Open · AndreaMas opened this issue 3 years ago · 8 comments

Does DS-NeRF also work for inward-looking scenes?

I've tried to make it work by brutally setting --spherify=True, but it gives awful results, unlike for forward-facing scenes.

I'm not sure whether I should try changing some parameters or if DS-NeRF is just not meant for this. If so, why?

PS: Thanks very much for your work and code.

AndreaMas avatar Oct 20 '21 12:10 AndreaMas

To explain myself better, I'd like something like this (standard NeRF output):

https://user-images.githubusercontent.com/32450751/141373534-f073f47a-eb3e-4732-8ecb-d28ed448076b.mp4

but I get this (DS-NeRF output):

https://user-images.githubusercontent.com/32450751/141373547-eb637e35-fb8e-4ef5-9096-7db42e75aa3a.mp4

Is there a way I can get better results in this kind of scene?

AndreaMas avatar Nov 11 '21 21:11 AndreaMas

Hello, is there any update regarding this issue? How can one train on a dataset of images taken 360 degrees around the object?

erick-alv avatar Jul 12 '22 08:07 erick-alv

I didn't find any solution in the end.

AndreaMas avatar Jul 12 '22 09:07 AndreaMas

Maybe for this kind of scenario (few inward-looking views of an object), BARF (Bundle-Adjusting NeRF) and RegNeRF (Regularizing NeRF) could work better (I never tried them, though).

AndreaMas avatar Jul 12 '22 09:07 AndreaMas

Hello, I have recently encountered this problem and can offer an explanation (it may not be accurate).

The depth supervision used in the paper is based on KL divergence. It assumes that the depth distribution along each ray is an impulse-like, single-peaked (roughly Gaussian) function.

But when we observe a scene from 360 degrees, a ray actually passes through two stages, entering and leaving a surface, so the depth distribution along each ray should really have multiple peaks. The KL term forces the depth weights on each ray to conform to a single-peaked distribution, so one side is always missing. For example, I observed a flower vase from 360 degrees. I first let NeRF train for 10 epochs to learn a basic geometric structure, and then introduced the KL depth loss. [image] As can be seen, without the KL term, NeRF learns a basic geometric structure.
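The multi-peak claim is easy to check with a toy NumPy sketch of the standard volume-rendering weights (illustrative code, not DS-NeRF's; the density values are made up): a ray passing through a hollow object picks up weight at both the front and back walls.

```python
import numpy as np

def render_weights(sigma, dz):
    """Standard NeRF volume-rendering weights: w_i = T_i * (1 - exp(-sigma_i * dz)),
    where T_i is the transmittance accumulated before sample i."""
    alpha = 1.0 - np.exp(-sigma * dz)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return transmittance * alpha

# Toy ray through a hollow object: density only at the front and back walls.
dz = 0.01
sigma = np.zeros(100)
sigma[20:25] = 50.0   # front wall (ray enters here)
sigma[75:80] = 50.0   # back wall (ray exits here)
w = render_weights(sigma, dz)
# w has a large peak around index 20 and a smaller one around index 75:
# the true depth distribution is multi-peaked, which a single narrow
# Gaussian target cannot represent.
```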

But when I introduced the KL term, it forced NeRF to recognize only the closest surface, so many surfaces were lost. [image]

So if you want to use depth supervision in a 360-degree scene, it is best not to use the KL divergence loss, but to compute the depth loss with MSE instead.
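To make the contrast concrete, the two losses can be sketched roughly like this (a minimal NumPy sketch; the KL form loosely follows the shape of the repo's `sigma_loss`, and all names and constants here are illustrative):

```python
import numpy as np

def kl_depth_loss(weights, z_vals, target_depth, err=1e-3):
    """KL-style depth supervision (roughly the shape of DS-NeRF's sigma_loss):
    pushes the whole per-ray weight distribution toward a narrow Gaussian at
    the sensed depth, so weight on any second surface is penalized."""
    gauss = np.exp(-(z_vals - target_depth) ** 2 / (2.0 * err))
    return np.sum(-np.log(weights + 1e-8) * gauss * np.gradient(z_vals))

def mse_depth_loss(weights, z_vals, target_depth):
    """MSE depth supervision: only constrains the expected (rendered) depth,
    leaving the rest of the distribution, e.g. a back-surface peak, free."""
    rendered_depth = np.sum(weights * z_vals)
    return (rendered_depth - target_depth) ** 2
```

Note that a symmetric two-peak weight distribution centered on the true depth has zero MSE loss but a large KL loss, which is exactly why the KL term erases one of the surfaces.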

I do not know if my answer is correct; you are welcome to discuss it with me.

YZsZY avatar Dec 06 '22 07:12 YZsZY

When depth is available, direct depth-guided sampling works best.


LYU @.***> wrote on Fri, Mar 3, 2023, 12:37:

I set depth_loss=True and sigma_loss=False in the DSNeRF config, which means only MSE is used for the depth loss: depth_loss = img2mse(depth_col, target_depth). But when I try to reconstruct a statue from 360-degree input, it still does not work. [image]


YZsZY avatar Mar 03 '23 04:03 YZsZY

Thanks for the reply. My depth comes from COLMAP; do you mean I should get depth from an RGB-D camera instead? Also, how do I do direct depth-guided sampling? Should I not use MSE to compute the depth loss?

silver-obelisk avatar Mar 03 '23 04:03 silver-obelisk

Refer to the work Dense Prior NeRF: use normal-distribution sampling around the depth for depth-valid regions, and just use the normal cascaded (coarse-to-fine) sampling for regions without valid depth.
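A rough sketch of what such depth-guided sampling might look like (hypothetical code, not taken from Dense Prior NeRF or DSNeRF; all parameter names are made up):

```python
import numpy as np

def sample_along_ray(near, far, depth=None, n=64, depth_std=0.05, rng=None):
    """Hypothetical depth-guided ray sampling: where a depth prior (e.g. from
    COLMAP or an RGB-D camera) exists for the ray, concentrate samples in a
    Gaussian around it; otherwise fall back to stratified sampling."""
    rng = np.random.default_rng() if rng is None else rng
    if depth is None:
        # Depth-invalid ray: ordinary stratified (coarse) sampling over [near, far].
        edges = np.linspace(near, far, n + 1)
        return edges[:-1] + rng.uniform(0.0, 1.0, n) * (edges[1:] - edges[:-1])
    # Depth-valid ray: half coarse samples, half Gaussian samples around the prior.
    coarse = np.linspace(near, far, n // 2)
    fine = np.clip(rng.normal(depth, depth_std, n - n // 2), near, far)
    return np.sort(np.concatenate([coarse, fine]))
```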


YZsZY avatar Mar 03 '23 04:03 YZsZY