
Monocular depth estimation for an in-the-wild AutoFocus application.

15 FocusOnDepth issues

Bumps [numpy](https://github.com/numpy/numpy) from 1.17.4 to 1.22.0. Release notes (sourced from numpy's releases): NumPy 1.22.0 is a big release featuring the work of 153 contributors spread...

dependencies

Cannot change the type to depth in the config, as it will not work; I get this error: RuntimeError: Error(s) in loading state_dict for FocusOnDepth: Unexpected key(s) in state_dict: "head_segmentation.head.0.weight", "head_segmentation.head.0.bias", "head_segmentation.head.2.weight", "head_segmentation.head.2.bias",...
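
A possible workaround (an assumption, not the repo's documented fix): the full checkpoint was trained with both heads, so a depth-only model rejects the segmentation-head weights. Filtering those keys before loading, as in this sketch, avoids the error; the `model_state_dict` key is a guess at the checkpoint layout:

```
import torch

def load_depth_only(model, checkpoint_path):
    # The 'model_state_dict' key and the fallback to a raw state dict are
    # assumptions about how the checkpoint was saved.
    checkpoint = torch.load(checkpoint_path, map_location="cpu")
    state_dict = checkpoint.get("model_state_dict", checkpoint)
    # Drop the segmentation-head weights that a depth-only model rejects.
    filtered = {k: v for k, v in state_dict.items()
                if not k.startswith("head_segmentation.")}
    model.load_state_dict(filtered, strict=False)
    return model
```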

Hi, thank you very much for the training script. I am currently trying to train on the SUNRGBD dataset for both depth prediction and semantic segmentation. I have added multiple classes...
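
For the multi-class case, a hedged sketch of what the segmentation head needs (its Sequential structure is inferred from the `head_segmentation.head.0` / `head.2` keys quoted in the issue above; the channel widths and class count here are illustrative, not the repo's actual values): the final convolution must output one channel per class.

```
import torch.nn as nn

num_classes = 38  # illustrative: SUNRGBD's 37 semantic classes + background

# Indices 0 and 2 mirror the 'head.0' / 'head.2' keys from the error above;
# the 256 and 128 channel widths are placeholders.
head_segmentation = nn.Sequential(
    nn.Conv2d(256, 128, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(128, num_classes, kernel_size=1),
)
```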

Hi, instead of vit_base_patch16_384, I'm trying to use vit_base_patch32_224 in the segmentation branch. I did change "patch_size": 32 and "transforms": { "resize": 224, "p_flip": 0.5, "p_crop": 0.3, "p_rot": 0.2 }, but it's crashing for some reason....
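
One likely cause of the crash (an assumption, since the traceback isn't quoted): a ViT's positional embeddings and any downstream reassemble stages are tied to the patch grid, which this change shrinks from 24x24 to 7x7, so anything that hard-codes the old grid breaks. A quick consistency check, using a hypothetical helper:

```
# Hypothetical sanity check, not part of the repo.
def check_vit_config(resize: int, patch_size: int) -> int:
    assert resize % patch_size == 0, "image size must be divisible by patch size"
    grid = resize // patch_size
    print(f"{grid}x{grid} patch grid -> {grid * grid} tokens (+1 CLS token)")
    return grid

check_vit_config(384, 16)  # 24x24 grid -> 576 tokens (original config)
check_vit_config(224, 32)  # 7x7 grid  -> 49 tokens  (new config)
```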

Hi, I am trying to make use of the DPT model; however, I would like to make some changes to the patch embedding layer. You have created the model using...
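
If the backbone comes from timm (as many DPT implementations do; an assumption here, since the issue is truncated), the patch-embedding projection can be swapped in place. A minimal sketch with an illustrative replacement Conv2d:

```
import timm
import torch.nn as nn

# Create the backbone without downloading weights; use pretrained=True in
# real use. vit_base models have embed_dim == 768.
vit = timm.create_model("vit_base_patch16_384", pretrained=False)

# The patch embedding is a Conv2d whose kernel size and stride both equal
# the patch size; replacing it keeps the rest of the transformer intact.
vit.patch_embed.proj = nn.Conv2d(
    in_channels=3,
    out_channels=vit.embed_dim,
    kernel_size=16,
    stride=16,
)
```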

Hi, I am trying to download the datasets using the "view on Kaggle" link, but it fails. How can I obtain the datasets? Thanks for your help!

Thanks for the training code for DPT. I wonder about the accuracy or metrics of the trained code: is its performance similar to that of the original paper's code?

Hi, when I ran your model on NYUv2, I noticed that the predicted segmentation map is always a completely black image, while the depth map comes out grayish.
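
A hedged guess at the symptom: if raw argmax class indices (small integers) are written straight to an 8-bit image, the result looks nearly black. Stretching the indices over the full 0-255 range, as in this illustrative helper, makes the classes visible:

```
import numpy as np
from PIL import Image

def save_visible_segmentation(logits, path="segmentation.png"):
    # logits: (1, num_classes, H, W) raw network output as a numpy array.
    classes = logits.argmax(axis=1)[0]            # (H, W) integer class ids
    scale = 255 // max(logits.shape[1] - 1, 1)    # stretch ids over 0..255
    Image.fromarray((classes * scale).astype(np.uint8)).save(path)
```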

Hello, thanks for the great code. I have a question about the dataset split. 1) Does the split code generate the validation dataset by taking **random** samples at the ratio which...
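
For reference, a minimal sketch of what a random ratio split typically looks like (illustrative only; the repo's actual split code isn't quoted here). With a fixed seed, the same random samples are drawn on every run:

```
import random

def split_indices(n_samples: int, val_ratio: float = 0.1, seed: int = 0):
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)   # fixed seed -> reproducible split
    n_val = int(n_samples * val_ratio)
    return indices[n_val:], indices[:n_val]  # train, validation

train_idx, val_idx = split_indices(1449)  # e.g. the 1449 labelled NYUv2 images
```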

Hi, I'm trying to modify the loss by adding a confidence weight for each pixel. I don't understand why you multiply the translated masks:
```
mask_x = torch.mul(mask[:, :, 1:],...
```
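
For context, this pattern appears in gradient-matching depth losses (as in MiDaS/DPT): a horizontal finite difference compares two neighbouring pixels, so it is only trustworthy where both pixels carry a valid mask. A minimal sketch, assuming `mask` is a 0/1 validity map of shape (batch, H, W):

```
import torch

def masked_gradient_x(depth: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # depth, mask: (batch, H, W); mask holds 0/1 validity per pixel.
    grad_x = depth[:, :, 1:] - depth[:, :, :-1]
    # The difference uses pixels j and j+1, so it is valid only where BOTH
    # are valid: multiplying the two shifted masks is an elementwise AND.
    mask_x = torch.mul(mask[:, :, 1:], mask[:, :, :-1])
    return grad_x * mask_x
```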