3d-photo-inpainting
LeRes method
I've tried to switch BoostingMonocularDepth to the new LeReS method (instead of MiDaS) and got odd results: a major mismatch between the foreground and background. Any ideas how to fix that?
I'd like to find out how to do this as well
Since BoostingMonocularDepth's README notes that "MiDaS-v2 and SGRnet estimate inverse depth while LeReS estimates depth.", you may add something like

if algo == 2:
    depth = 65535.0 - depth

into boostmonodepth_utils.py to use the depthNet 2 (LeReS) algorithm.
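A minimal sketch of what that patch amounts to, assuming depth maps are stored in the 16-bit range (max 65535) and that `algo` carries the depthNet code (the constant names below are hypothetical, not from the repo). MiDaS/SGRnet already output inverse depth, so only the LeReS output needs flipping before it reaches the rest of the pipeline:

```python
import numpy as np

# Hypothetical depthNet codes, matching the --depthNet CLI convention
MIDAS, SGRNET, LERES = 0, 1, 2


def normalize_depth(depth, algo):
    """Convert an estimator's output to the inverse-depth convention
    expected by 3d-photo-inpainting (16-bit value range)."""
    depth = np.asarray(depth, dtype=np.float64)
    if algo == LERES:
        # LeReS estimates depth, not inverse depth: flip it so that
        # near objects get large values, like MiDaS produces.
        depth = 65535.0 - depth
    return depth
```

With this conversion, a LeReS pixel at value 0 (nearest) becomes 65535 in the inverse-depth map, while MiDaS/SGRnet outputs pass through unchanged.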
My colab notebook: https://colab.research.google.com/drive/1fVsU6DUbgO5BkU0ws20A8odkZllYDkht
Patched 3d-photo-inpainting & BoostingMonocularDepth, including the ...mesh.node[... => ...mesh.nodes[... patch, which starts here; mount on (((gdrive)))/ttmmpp/ML/3d-photo-inpainting. https://drive.google.com/drive/folders/1euIX6aoJ4k1mxQMfIhZ5VWlLSTktEPer
...
Hello @Klanly, I just tried to implement LeReS too:
https://github.com/vt-vl-lab/3d-photo-inpainting/pull/188/files
But I feel that the old 3D result (MiDaS) is better than the LeReS 3D result... is there some code I missed? (I will try your colab later, thank you for sharing!)
[Update] I have tested your code, and we get the same result. So I guess this is all the LeReS implementation can do.