
[CoRL 2022] SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation

19 SurroundDepth issues

Hi, I found that the number of training samples is 20096, but the official nuScenes dataset has 28119. I am confused why the training number is less than the...

Dear author: Thank you very much for your contributions in this paper! I am trying to get the depth during evaluation. Theoretically, we can get the metric depth between min_depth and...
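For context, the min_depth/max_depth range mentioned here typically comes from the Monodepth2-style conversion that maps the network's sigmoid disparity output into that depth interval. Below is a minimal sketch of that conversion; the function follows the Monodepth2 convention this codebase builds on, and the exact implementation in SurroundDepth may differ.

```python
import torch

def disp_to_depth(disp, min_depth, max_depth):
    """Convert a sigmoid disparity map in [0, 1] into depth in [min_depth, max_depth].

    Depth is only metric up to the scale recovered during training/evaluation.
    """
    min_disp = 1.0 / max_depth
    max_disp = 1.0 / min_depth
    scaled_disp = min_disp + (max_disp - min_disp) * disp
    depth = 1.0 / scaled_disp
    return scaled_disp, depth

# Example: a random sigmoid output mapped into the [0.1 m, 80 m] range
disp = torch.rand(1, 1, 192, 640)
_, depth = disp_to_depth(disp, min_depth=0.1, max_depth=80.0)
```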

Hi, this is great work. Can it be deployed on an embedded system such as the Nvidia Jetson AGX?

Hi! I followed the README to prepare the DDAD data. But after performing the sift and match operations, I find that the content of the sift and match folders is...
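For reference, the sift/match preprocessing referred to here extracts SIFT keypoints and matches them between overlapping camera views. The sketch below illustrates that kind of pipeline with OpenCV; the file names, thresholds, and output layout are illustrative assumptions, not the repository's exact script.

```python
import cv2
import numpy as np

# Hypothetical pair of overlapping surround-view images
img1 = cv2.imread("cam_front.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("cam_front_right.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT keypoints and descriptors for each view
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching between the two views
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Save matched pixel coordinates, roughly what a "match" folder would hold
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
np.save("match_front__front_right.npy", np.concatenate([pts1, pts2], axis=1))
```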

Basically, we use self-supervised methods to train depth prediction models. Have you tried combining self-supervised with supervised training? You know, the nuScenes and DDAD datasets have some sparse point clouds. I...
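If one did want to mix the sparse LiDAR points mentioned here into training, a common approach is to add a masked L1 term on the projected points alongside the photometric loss. The sketch below only illustrates that idea; it is not something the repository implements.

```python
import torch

def sparse_depth_loss(pred_depth, lidar_depth):
    """L1 loss evaluated only where projected LiDAR points give a valid depth.

    pred_depth, lidar_depth: (B, 1, H, W); lidar_depth is 0 where no point projects.
    """
    valid = lidar_depth > 0
    if valid.sum() == 0:
        return pred_depth.new_zeros(())
    return torch.abs(pred_depth[valid] - lidar_depth[valid]).mean()

# total_loss = photometric_loss + lambda_sparse * sparse_depth_loss(pred, lidar)
```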

Hi, I use the command "python -m torch.distributed.launch --nproc_per_node 4 run.py --model_name test --config configs/nusc.txt --models_to_load depth encoder --load_weights_folder=/log/nusc/model/weights/ --save_pred_disps --eval_out_dir=/log/nusc/eval/ --eval_only", but there is no picture output. The eval...
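Note that a --save_pred_disps style flag usually dumps the raw predictions to a .npy file rather than images, so turning them into viewable pictures is a separate step. The snippet below shows one way to do that; the output filename and array layout are assumptions, so check what the evaluation actually writes into eval_out_dir.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical path: the file the evaluation is assumed to have written
disps = np.load("/log/nusc/eval/disps.npy")  # shape assumed to be (N, H, W)

# Save the first few disparity maps as colored PNGs for inspection
for i, disp in enumerate(disps[:10]):
    plt.imsave(f"/log/nusc/eval/disp_{i:04d}.png", disp, cmap="magma")
```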

Hello, your results are great and I'm following your work. I have a problem reproducing the results on the nuScenes dataset; the abs_rel by Monodepth2 for the nuScenes dataset are...

Hi, thank you for the nice work! I'm confused about the 'focal_scale'. Why do we need to do this: https://github.com/weiyithu/SurroundDepth/blob/22dfecfe8fca62a38d0f682ff7bf65b41aba3cac/runer.py#L382-L383
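As context for the question, the referenced lines appear to rescale predicted depth using the ratio between each camera's focal length and a fixed focal_scale constant, so that cameras with different intrinsics produce consistent depths. The snippet below is only a paraphrase of that idea; the scaling direction, constant value, and variable names are assumptions, not the exact code at runer.py#L382-L383.

```python
import torch

def rescale_depth_by_focal(depth, K, focal_scale=500.0):
    """Scale predicted depth in proportion to the camera's focal length.

    depth:       (B, 1, H, W) predicted depth
    K:           (B, 4, 4) or (B, 3, 3) intrinsics; fx is K[:, 0, 0]
    focal_scale: reference focal length the network is assumed to be normalised to
    """
    fx = K[:, 0, 0].view(-1, 1, 1, 1)
    return depth * fx / focal_scale

# Example with two cameras of different focal lengths
depth = torch.rand(2, 1, 192, 640) * 80
K = torch.eye(4).unsqueeze(0).repeat(2, 1, 1)
K[:, 0, 0] = torch.tensor([800.0, 1200.0])
depth_rescaled = rescale_depth_by_focal(depth, K)
```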

Hi, thanks for sharing your great work. I am reading the code of this work. I notice you get the adjacent frame with this line: index_temporal_i = cam_sample['prev'], and index_temporal_i...
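For background, nuScenes sample_data records expose 'prev' and 'next' tokens, so fetching the temporally adjacent frame of a camera looks roughly like the following with the official nuscenes-devkit; the variable names and dataroot here are illustrative.

```python
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version="v1.0-mini", dataroot="/data/nuscenes", verbose=False)

# Placeholder: take the token of some camera frame (sample_data record)
sample_data_token = nusc.sample_data[0]["token"]
cam_sample = nusc.get("sample_data", sample_data_token)

# 'prev' and 'next' are empty strings at the start/end of a scene
index_temporal_prev = cam_sample["prev"]
if index_temporal_prev:
    prev_frame = nusc.get("sample_data", index_temporal_prev)
```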