HoHoNet
"HoHoNet: 360 Indoor Holistic Understanding with Latent Horizontal Features" official pytorch implementation.
What's the unit of the depth map, and what's the unit of the 3D point cloud?
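For context, here is a minimal back-projection sketch, assuming a standard equirectangular convention (not taken from the HoHoNet code). The point cloud inherits whatever unit the depth map uses: if the depth values are meters, the XYZ coordinates are meters too. Whether depth is stored as Euclidean distance or as a planar z-distance is exactly the kind of convention this question is about.

```python
import numpy as np

def equirect_depth_to_xyz(depth):
    """Back-project an equirectangular depth map (H, W) to an (H, W, 3)
    point cloud. XYZ comes out in the same unit as the depth values."""
    h, w = depth.shape
    # Pixel-center angles: lon spans [-pi, pi), lat spans [pi/2, -pi/2).
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Treat depth as Euclidean distance from the camera center.
    x = depth * np.cos(lat) * np.sin(lon)
    y = depth * np.sin(lat)
    z = depth * np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)
```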
Hi, thank you for your work and for writing such a well-documented repository. I wanted to ask if this repo uses the same ground-truth format as what is referenced...
Hi, thanks for your excellent work and repo. A question: how are you defining your spherical coordinate system? From the code below, it seems you have a reflection...
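To illustrate what such a reflection means, here is a sketch using a generic convention (not the repository's actual code): negating one axis of the spherical-to-Cartesian mapping mirrors the frame and flips its handedness, which a determinant check makes visible.

```python
import numpy as np

def sph_to_xyz(lon, lat, flip_x=False):
    """One common equirect convention; flip_x gives the mirrored variant
    (a reflection, i.e. the opposite handedness)."""
    sign = -1.0 if flip_x else 1.0
    return np.array([sign * np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

# Handedness check: determinant of the frame spanned by the directions
# hit at (lon, lat) = (pi/2, 0), (0, pi/2), (0, 0).
for flip in (False, True):
    basis = np.stack([sph_to_xyz(np.pi / 2, 0.0, flip),
                      sph_to_xyz(0.0, np.pi / 2, flip),
                      sph_to_xyz(0.0, 0.0, flip)])
    print(flip, round(np.linalg.det(basis)))   # False -> 1, True -> -1
```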
Added the Anaconda environment configuration to help users install the packages needed.
I can't find the code in the BiFuse repository; could you point me to its exact location?
Hi, I used your provided pth file and yaml with test_depth.py (ckpt/s2d3d_depth_HOHO_depth_dct_efficienthc_TransEn1/ep60.pth, config/s2d3d_depth/HOHO_depth_dct_efficienthc_TransEn1.yaml) and got: {'mre': array(0.10142188), 'mae': array(0.2026864), 'rmse': array(0.38335027), 'rmse_log': array(0.06684125), 'log10': array(0.04376619), 'delta_1': array(0.90537266), 'delta_2': array(0.96934565), 'delta_3':...
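For reference, these are the standard monocular-depth metrics. Below is a minimal sketch of how they are typically computed; HoHoNet's exact implementation may differ, for example in the log base used for rmse_log and log10.

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-8):
    """Standard monocular-depth metrics over valid (gt > 0) pixels."""
    mask = gt > 0
    pred = np.clip(pred[mask], eps, None)   # guard against zeros in logs/ratios
    gt = gt[mask]
    thresh = np.maximum(pred / gt, gt / pred)
    return {
        'mre': np.mean(np.abs(pred - gt) / gt),     # mean relative error
        'mae': np.mean(np.abs(pred - gt)),          # mean absolute error
        'rmse': np.sqrt(np.mean((pred - gt) ** 2)),
        'rmse_log': np.sqrt(np.mean((np.log10(pred) - np.log10(gt)) ** 2)),
        'log10': np.mean(np.abs(np.log10(pred) - np.log10(gt))),
        'delta_1': np.mean(thresh < 1.25),
        'delta_2': np.mean(thresh < 1.25 ** 2),
        'delta_3': np.mean(thresh < 1.25 ** 3),
    }
```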
The color map used in the notebook does not match the color map in the paper, which makes it hard to check whether the segmentation works on other examples.
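One way to make the outputs comparable is to colorize predictions with a fixed per-class palette taken from the paper. A minimal sketch follows, with a hypothetical PALETTE; the real per-class colors would have to be copied from the authors' figures or code.

```python
import numpy as np

# Hypothetical fixed per-class palette (RGB, 0-255); replace with the
# paper's actual colors, one row per semantic class.
PALETTE = np.array([
    [128,  64, 128],   # class 0
    [ 70,  70,  70],   # class 1
    [153, 153, 153],   # class 2
    # ... one row per remaining class
], dtype=np.uint8)

def colorize(label_map):
    """Map an (H, W) integer label map to an (H, W, 3) RGB image."""
    return PALETTE[label_map]
```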
I have downloaded the Stanford2D3D dataset, but the ground-truth depth image looks like the one below. Could you let me know how you got the result in your paper?...
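If the ground-truth depth looks dark or nearly black in an image viewer, it may simply be undecoded: Stanford2D3D depth is stored as 16-bit PNGs. A minimal decoding sketch, assuming the commonly cited 1/512 m scale and 65535 as the invalid-pixel marker (check the dataset documentation):

```python
import cv2
import numpy as np

# Read the raw 16-bit depth PNG without any conversion.
raw = cv2.imread('depth.png', cv2.IMREAD_UNCHANGED).astype(np.float32)
# Assumed encoding: meters = raw / 512, with 65535 marking invalid pixels.
depth_m = np.where(raw == 65535, 0.0, raw / 512.0)
```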
Input image: 'assets/pano_asmasuxybohhcj.png'. This is the sem result I get: and my code is:

```python
import os
import argparse
import importlib

import cv2
from natsort import natsorted
import numpy as np
import torch
...
```