
Depth outputs all blank

Open meder411 opened this issue 5 years ago • 15 comments

I am running the code with the sample data that ships with the repo. Speed issues aside, all of the depth outputs I have seen so far (~25 frames) are the same empty image. Each pixel in the output PNG is [68, 1, 84, 255]. Furthermore, I have yet to see the solver converge. Each frame runs 500 iterations and quits.

Am I doing something wrong? The intermediate images all look fine, although the sparse depth images are a bit weak. All I've done is run the code as-is.

Here are the input, output and intermediate results for frame 39:

Input 000039

Output 000039

Image gradient image_gradient_000039

Flow flow_0_000039

Flow gradient

Reliability reliability_1_000039

Soft edges soft_edges_000039

Solver initialization solver_initialization_000039

Solver smoothness solver_smoothness_y_000039

Sparse points sparse_points_000039

meder411 avatar Jun 12 '19 20:06 meder411

After looking further into the problem, I haven't come up with a solution yet, but I've diagnosed the source. It appears the issue arises when applying the TemporalMedian function. For frame 14, the input and outputs are shown below.

INPUT 000014

UNFILTERED DEPTH 000014

FILTERED DEPTH (after applying TemporalMedian) 000014 png_filtered

meder411 avatar Jun 13 '19 21:06 meder411

Thanks for bringing this to my attention. I'm looking into this now...

holynski avatar Jun 13 '19 23:06 holynski

I'm not able to reproduce your error. Running the code as-is produces the following filtered depth map for frame 14:

000014

Have you changed the code somehow in your version? If so, can you share the specific changes you've made?

holynski avatar Jun 14 '19 00:06 holynski

Hm. Interesting. The only thing I did was copy the code from the Jupyter notebook into a Python script. I am also running it in a conda environment. Perhaps one of those two factors is playing a role? I will take a further look as well and report back.

meder411 avatar Jun 14 '19 00:06 meder411

Are you able to produce the correct output from within the Jupyter notebook?

holynski avatar Jun 14 '19 00:06 holynski

No, I can't. It must be something to do with the conda environment, I guess? It runs with no errors, but the filtered depth is still empty for some reason (when running in the Jupyter notebook). I am using opencv 4.1 (opencv-contrib-python 4.1.0.25) and pyquaternion 0.9.5, running on an Ubuntu 18.04 server with Python 3.7.3.

meder411 avatar Jun 14 '19 21:06 meder411

Here's my setup:

- OSX 10.14.3
- Python 3.7.2 (via conda)
- OpenCV 4.0.0
- SciPy 1.2.1
- PyQuaternion 0.9.5
- NumPy 1.15.4

To open and run the code, I start a Python kernel with jupyter notebook, navigate to the notebook, and press "Run All".

If indeed there is an issue within the TemporalMedian function, I'm not sure what would cause it, apart from a change in the np.median function. Could you tell me the size of the array (depth_maps) passed to the TemporalMedian function?
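For reference, if TemporalMedian just stacks the last few depth maps and takes a per-pixel np.median, it reduces to something like the sketch below (the function and variable names are illustrative, not necessarily the repo's exact code); printing the stack shape is an easy way to answer the question above:

    import numpy as np

    def temporal_median_sketch(depth_maps):
        # depth_maps: list of (H, W) arrays from the last few frames.
        stack = np.stack(depth_maps, axis=0)          # shape (N, H, W)
        print("stack shape passed to the median:", stack.shape)
        # Plain per-pixel median over the temporal axis; Inf or NaN values
        # in the stack can leak into the filtered output.
        return np.median(stack, axis=0)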

Just to double check -- you do mean frame 000014.png, and not the 14th frame that is saved, correct? This is the first frame that is saved if skip_frames is set to 0.

Also, for sanity -- you haven't modified the COLMAP files, right?

holynski avatar Jun 14 '19 22:06 holynski

I do mean frame000014.png, and I haven't modified the COLMAP files at all. It is odd, though, because you're right: TemporalMedian is basically just a numpy function, and that's a fairly boilerplate operation that's unlikely to be the cause. Interestingly, the subsequent solver initialization images all look great. I'm thinking there's a visualization issue somewhere on my end.

Solver initialization of frame000015.png, after frame000014.png's output was blank:

meder411 avatar Jun 14 '19 23:06 meder411

Ah, that's definitely possible. Let me look into that code.

holynski avatar Jun 14 '19 23:06 holynski

Can you inspect the min/max values of the filtered depth map, and compare it to that of the solver initialization?

It could be an issue with the automatic scaling.
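To illustrate the scaling concern (assuming the saved PNG is produced by min/max-normalizing the depth array before applying a colormap, which I haven't verified against the repo's visualization code), a single Inf collapses every finite pixel to zero, i.e. the bottom of the colormap -- which would match every output pixel sitting at something like [68, 1, 84] in a viridis-style palette:

    import numpy as np

    depth = np.array([0.80, 1.00, 1.25, np.inf])
    # (max - min) is Inf, so every finite pixel normalizes to 0 and the Inf
    # pixel becomes NaN -- the whole image lands on the lowest colormap entry.
    scaled = (depth - depth.min()) / (depth.max() - depth.min())
    print(scaled)   # [0. 0. 0. nan]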

holynski avatar Jun 14 '19 23:06 holynski

Hm, results below:

==>Processing frame000014.png
Initialization min: 0.5802239
Initialization max: 1.266863
Raw depth min: 0.80136484
Raw depth max: Inf
Filtered depth min: 0.80136484
Filtered depth max: Inf
==>Processing frame000015.png
Initialization min: 0.5802239
Initialization max: 1.266863
Raw depth min: 0.80864733
Raw depth max: Inf
Filtered depth min: 0.81061256
Filtered depth max: Inf

I believe you are right. It's the Infs that are screwing it up, it seems -- they're throwing off the scale. I haven't found the specific source of those yet, but I'm fairly sure there's a divide-by-zero error somewhere.
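A quick, hedged way to pin down where the Infs are (filtered_depth below is a stand-in for whatever array gets written out; in the real run it would be the TemporalMedian output):

    import numpy as np

    # Stand-in for the filtered depth map; replace with the real array.
    filtered_depth = np.array([[0.80, 1.10],
                               [np.inf, 0.95]])

    # Locate non-finite pixels; if they line up with zeros in the solver's
    # disparity output, that points at a divide-by-zero.
    bad = ~np.isfinite(filtered_depth)
    print("non-finite pixels:", int(bad.sum()), "of", filtered_depth.size)
    print("locations (row, col):", list(zip(*np.nonzero(bad))))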

meder411 avatar Jun 14 '19 23:06 meder411

Yep - very likely a divide-by-zero. I've actually seen debug warnings on my end for a while, but never had the time to investigate. Let's see if I can find it now.

holynski avatar Jun 14 '19 23:06 holynski

Found it at the end of the DensifyFrame function, when copying the solver disparities back into the depth map.

            depth[col,row] = 1.0 / x[col + row * w]

Will push a fix shortly.

holynski avatar Jun 16 '19 00:06 holynski

> Found it at the end of the DensifyFrame function, when copying the solver disparities back into the depth map.
>
>             depth[col,row] = 1.0 / x[col + row * w]
>
> Will push a fix shortly.

Did you have time to fix this yet? I ran into the exact same problem as described in this issue.

HeCraneChen avatar Sep 01 '21 00:09 HeCraneChen

> Found it at the end of the DensifyFrame function, when copying the solver disparities back into the depth map.
>
>             depth[col,row] = 1.0 / x[col + row * w]
>
> Will push a fix shortly.

> Did you have time to fix this yet? I ran into the exact same problem as described in this issue.

NVM, it can be fixed by simply adding a condition: if x[col + row * w] != 0.0
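For anyone else hitting this, here is a self-contained sketch of that guard applied to the quoted line (w, h, x, and depth are stand-ins for the DensifyFrame locals; the indexing follows the snippet above, not necessarily the repo's current code):

    import numpy as np

    # Stand-ins for the DensifyFrame locals.
    w, h = 4, 3
    x = np.random.rand(w * h)       # solver disparities, flattened
    x[5] = 0.0                      # simulate a pixel the solver left at zero
    depth = np.zeros((w, h))

    for row in range(h):
        for col in range(w):
            d = x[col + row * w]
            if d != 0.0:            # skip zero disparities instead of producing Inf
                depth[col, row] = 1.0 / d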

HeCraneChen avatar Sep 01 '21 04:09 HeCraneChen