
How to use captured depth images?

Open jgnooo opened this issue 2 years ago • 9 comments

Hello, Thank you for the great work!!

I'm using the NeRFCapture app to obtain RGB-D images and poses. However, when I generate a point cloud from these data, something looks wrong.

[Screenshot: the distorted point cloud generated from the captured RGB-D data]

I think it is because of the depth image scale. How can I resolve this, and what is the scale of the depth maps produced by NeRFCapture?

Thank you 🥹

jgnooo avatar Jul 11 '23 07:07 jgnooo

The depth image is upscaled quite a bit. I think it's something on the order of 250x120 (not sure what the exact number is). I think what you are seeing is mostly the interpolation.
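
If it helps, here is a rough Python sketch of back-projecting one frame into a point cloud. The transforms.json keys (fl_x, fl_y, cx, cy, depth_path, file_path), the assumption that the depth PNG stores 16-bit millimetres, and the nearest-neighbour resize are all guesses about the format, so check them against your own capture:

```python
# Rough sketch: back-project one captured frame into a point cloud.
import json
import numpy as np
import cv2

with open("transforms.json") as f:
    meta = json.load(f)

frame = meta["frames"][0]
rgb = cv2.imread(frame["file_path"], cv2.IMREAD_COLOR)
depth = cv2.imread(frame["depth_path"], cv2.IMREAD_UNCHANGED).astype(np.float32)

# The saved depth is upscaled/interpolated; resize it to the RGB resolution
# with nearest-neighbour so no further interpolation is introduced.
h, w = rgb.shape[:2]
depth = cv2.resize(depth, (w, h), interpolation=cv2.INTER_NEAREST)
depth_m = depth / 1000.0  # assumed 16-bit millimetres -> metres; adjust if your scale differs

fx, fy, cx, cy = meta["fl_x"], meta["fl_y"], meta["cx"], meta["cy"]
u, v = np.meshgrid(np.arange(w), np.arange(h))
valid = depth_m > 0
z = depth_m[valid]
x = (u[valid] - cx) / fx * z
y = (v[valid] - cy) / fy * z
points_cam = np.stack([x, y, z], axis=-1)

# transform_matrix is camera-to-world; depending on the convention (OpenGL vs
# OpenCV camera axes) you may need to flip the y and z axes before applying it.
c2w = np.asarray(frame["transform_matrix"])
points_world = points_cam @ c2w[:3, :3].T + c2w[:3, 3]
print(points_world.shape)
```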

jc211 avatar Jul 11 '23 07:07 jc211

Hi, I have a problem converting the captured depth image to real depth. The depth map is three times larger than the color image, and I don't know how to extract the true depth correctly. I have attached relevant images for your reference; can you help me with that? ref.zip

garylidd avatar Aug 30 '23 09:08 garylidd

Looks like the offline depth is not working properly. I'll try and fix it soon. Sorry about that.

jc211 avatar Aug 31 '23 23:08 jc211

As a workaround, we (Spectacular AI) have released an alternative, offline-focused iOS data collection app that can record both depth and high-resolution video at high FPS. Its outputs can be used with InstantNGP, Nerfstudio (step-by-step tutorial), and probably even SplaTAM (see here).

oseiskar avatar Dec 07 '23 09:12 oseiskar

I would also recommend looking into the Polycam app; they enable a developer mode that gives you RGB + depth + camera poses/intrinsic parameters.

https://github.com/PolyCam/polyform

This may remove the need for a network connection.

pablovela5620 avatar Dec 07 '23 13:12 pablovela5620

Any update on the fix for the offline version?

nitthilan avatar Jan 04 '24 01:01 nitthilan

Hey, sorry, I have not had the chance to look at this yet. However, if anyone is compiling from source and would like to have a go at fixing it, these are the lines that need changing:

https://github.com/jc211/NeRFCapture/blob/312cb01efd5bd84e30cf9dea32a3a12a70abbc5b/NeRFCapture/DatasetWriter.swift#L173C1-L176C18

The problem is that depthBuffer!.pngData() does not work; it should be replaced with a manual operation that converts the depth buffer into 16-bit values that can be written to a PNG.
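
For anyone post-processing on a PC instead of patching the app, the target on-disk format would look roughly like this (a hedged Python sketch; packing metres into 16-bit millimetres is just one reasonable convention, not necessarily what the app will end up writing):

```python
# Sketch: pack a float depth map (metres) into a 16-bit PNG and read it back.
# Millimetre quantisation gives ~1 mm precision and a 0-65.535 m range in uint16.
import numpy as np
import cv2

def write_depth_png(depth_m: np.ndarray, path: str) -> None:
    """depth_m: float32 depth in metres, shape (H, W)."""
    depth_mm = np.clip(depth_m * 1000.0, 0, 65535).astype(np.uint16)
    cv2.imwrite(path, depth_mm)  # OpenCV writes single-channel uint16 PNGs natively

def read_depth_png(path: str) -> np.ndarray:
    depth_mm = cv2.imread(path, cv2.IMREAD_UNCHANGED)
    return depth_mm.astype(np.float32) / 1000.0
```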

jc211 avatar Jan 12 '24 00:01 jc211

I'm not familiar with processing pixels on iOS, so I save the depth as a binary file and process it on a PC with Python. https://github.com/Zhangyangrui916/NeRFCapture/commit/3707209eff4da752469c12a58330b38543ad6055
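
For reference, reading such a raw dump back on the PC side could look like this (a sketch; the float32 dtype, row-major layout, 256x192 resolution, and the file name are assumptions, so verify them against the commit above):

```python
# Sketch: load a raw depth dump and convert it to an array of metres.
# ARKit's sceneDepth buffer is Float32 metres, so no extra scaling is applied here.
import numpy as np

DEPTH_W, DEPTH_H = 256, 192  # assumed native LiDAR depth resolution

def load_depth_bin(path: str) -> np.ndarray:
    depth = np.fromfile(path, dtype=np.float32)  # assumed little-endian float32
    return depth.reshape(DEPTH_H, DEPTH_W)       # assumed row-major layout

depth_m = load_depth_bin("0.depth.bin")  # hypothetical file name
print(depth_m.min(), depth_m.max())
```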

Zhangyangrui916 avatar Feb 26 '24 10:02 Zhangyangrui916

any update on this bug?

Xenthio avatar Oct 15 '25 07:10 Xenthio