
Prediction result seems abnormal.

Hub-Tian opened this issue 5 years ago • 16 comments

Thanks for your wonderful work! I encountered a problem while visualizing the results on KITTI using your pretrained model. I plotted the results ('pred') from test.py (`pred, time_temp = test(imgL, sparse, mask)`), and the visualization looks abnormal. Any advice on this? [image] I used an image from the 2D detection training set and obtained the sparse LiDAR depth map by projecting the LiDAR point cloud into the image plane.
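For context, the projection step described above can be sketched as follows, assuming KITTI-style calibration matrices (a 4x4 velodyne-to-camera transform and a 3x4 camera projection matrix `P2`); the function name and signature are illustrative, not from the repo:

```python
import numpy as np

def project_lidar_to_depth(points, Tr_velo_to_cam, P2, h, w):
    """Rasterize N x 3 LiDAR points into a sparse (h, w) depth map.

    Tr_velo_to_cam: 4x4 velodyne-to-camera transform.
    P2: 3x4 camera projection matrix (KITTI calibration convention).
    """
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous (N, 4)
    cam = (Tr_velo_to_cam @ pts_h.T).T                          # camera frame (N, 4)
    cam = cam[cam[:, 2] > 0]                                    # keep points in front of the camera
    proj = (P2 @ cam.T).T                                       # projective pixel coords (N, 3)
    u = (proj[:, 0] / proj[:, 2]).astype(int)
    v = (proj[:, 1] / proj[:, 2]).astype(int)
    z = cam[:, 2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[inside], v[inside], z[inside]
    depth = np.zeros((h, w), dtype=np.float32)
    order = np.argsort(-z)          # write far points first so near points win
    depth[v[order], u[order]] = z[order]
    return depth
```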

Hub-Tian avatar Sep 08 '19 06:09 Hub-Tian

I think it is because the predicted depth in the sky region is wrong; you could crop some of the top rows of the output dense depth.
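The suggested crop is just array slicing; the number of rows to drop is an assumption that depends on the image size (KITTI images have no LiDAR returns near the top):

```python
import numpy as np

def crop_sky(dense_depth, top_rows=100):
    """Drop the top rows of the dense depth map, where there are no
    LiDAR returns and the network's sky predictions are unreliable."""
    return dense_depth[top_rows:, :]
```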

JiaxiongQ avatar Sep 09 '19 07:09 JiaxiongQ

Thanks for your reply! Cropping the top rows alleviates this problem; however, this "ray-like" artifact also appears near the edges of objects. It seems to be caused by the continuous depth prediction at object boundaries, where the depth should "jump" in reality. Should some post-processing be applied to the "pred" returned by the "test" function (`pred, time_temp = test(imgL, sparse, mask)`)? I also wonder how you plotted Figure 1 in your paper; I am trying to reproduce results like yours. [image]

Hub-Tian avatar Sep 09 '19 07:09 Hub-Tian

Because our result is not smooth, you can filter the dense depth with a traditional filter such as the median filter. To get the map in our paper, you can ask Yinda Zhang for help. Thanks for your attention!
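A minimal pure-NumPy sketch of that post-processing step; a library routine such as `scipy.ndimage.median_filter` would do the same thing much faster, and the kernel size here is a tunable assumption:

```python
import numpy as np

def median_filter_depth(depth, k=3):
    """Naive k x k median filter (k odd). Suppresses isolated
    'ray-like' outliers in a dense depth map."""
    pad = k // 2
    padded = np.pad(depth, pad, mode='edge')
    out = np.empty_like(depth)
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```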

JiaxiongQ avatar Sep 09 '19 08:09 JiaxiongQ

Hi, this is an impressive work; however, some questions come to mind.

  1. I ran the code on KITTI (train or val) with your trained model as described below. [image]
  2. The result is below. INPUT: [lidar_raw] 0000000005 [gt] 0000000005 [rgb] 0000000000

OUTPUT 0000000005

I want to know what this is. Is it the depth map, and how can I get the correct depth map? Thanks!

nowburn avatar Sep 17 '19 06:09 nowburn

The torchvision version must be 0.2.0.


JiaxiongQ avatar Sep 17 '19 06:09 JiaxiongQ

BTW, how long does it take to train the model from scratch? And what kind of GPUs, and how many, did you use?

Hub-Tian avatar Sep 17 '19 06:09 Hub-Tian

We used 3 GeForce GTX 1080 Ti GPUs, and training took about 3 days.

JiaxiongQ avatar Sep 17 '19 08:09 JiaxiongQ

@nowburn I have the same problem as you. When I run test.py with the pretrained model, the evaluation results look abnormal: rmse: 7998.173, irmse: 2.1443906, mae: 4290.926, imae: 0.2070867. The first dense map is shown: 2011_09_26_drive_0002_sync_image_0000000005_image_02

I wonder if it's related to the PyTorch version; mine is 1.0.1.

junweifu avatar Sep 17 '19 14:09 junweifu

You'd better use the environment that our requirements describe.

JiaxiongQ avatar Sep 18 '19 01:09 JiaxiongQ

@junweifu Thanks to the author's reply, it works with the environment that the author's requirements describe.

nowburn avatar Sep 18 '19 02:09 nowburn

@JiaxiongQ Thanks for your earlier reply. I want to know how to evaluate metrics like 'rmse'; the KITTI website says they don't accept informal evaluations. I used your code to compute the rmse. [input] prediction 0000000005 gt: depth_annotated 0000000005

  1. I computed their 'rmse', and the result is 123.375. Is this the right way? After all, the prediction is a dense depth map while depth_annotated is sparse.
  2. Is it possible to get a dense ground-truth depth map like the prediction?

Thanks!

nowburn avatar Sep 18 '19 06:09 nowburn

  1. This value might be wrong; we compute the 'rmse' on the pixels where both the gt and the prediction have positive values.
  2. I think it is hard to get a dense gt depth map in an outdoor scene based on present sensors.
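The masked evaluation described in point 1 can be sketched as follows; the depth units are whatever the maps are stored in (e.g. metres after dividing KITTI's uint16 PNGs by 256):

```python
import numpy as np

def masked_rmse(pred, gt):
    """RMSE over pixels where both prediction and ground truth
    have positive (valid) values."""
    mask = (gt > 0) & (pred > 0)
    diff = pred[mask] - gt[mask]
    return float(np.sqrt(np.mean(diff ** 2)))
```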

JiaxiongQ avatar Sep 18 '19 08:09 JiaxiongQ

Thank you for your advice. I found that the torchvision version causes this kind of problem.

junweifu avatar Sep 19 '19 04:09 junweifu

@JiaxiongQ Thank you for your help. I used the official devkit tools to evaluate the results of the pretrained model. The results are as follows:

mean mae: 0.215136
mean rmse: 0.687001
mean inverse mae: 0.00109365
mean inverse rmse: 0.00250434
mean log mae: 0.0123438
mean log rmse: 0.0269894
mean scale invariant log: 0.0267794
mean abs relative: 0.0124689
mean squared relative: 0.0011126

Is the evaluation method from the official devkit tools the same as yours? Do those results seem normal?
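For reference, the devkit's headline numbers can be approximated with a few lines of NumPy. This is a sketch of my understanding (errors in metres, evaluated only where the sparse ground truth is valid), not the devkit itself:

```python
import numpy as np

def kitti_style_metrics(pred_m, gt_m):
    """Approximate mae/rmse and their inverse-depth variants.

    pred_m, gt_m: depth maps in metres (KITTI stores depth PNGs as
    uint16 with depth = png / 256). Only pixels with positive ground
    truth are evaluated; pred is assumed positive at those pixels.
    """
    mask = gt_m > 0
    d = pred_m[mask] - gt_m[mask]
    di = 1.0 / pred_m[mask] - 1.0 / gt_m[mask]   # inverse depth, 1/m
    return {
        "mae": float(np.mean(np.abs(d))),
        "rmse": float(np.sqrt(np.mean(d ** 2))),
        "imae": float(np.mean(np.abs(di))),
        "irmse": float(np.sqrt(np.mean(di ** 2))),
    }
```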

One depth completion result is shown as follows: 2011_09_26_drive_0002_sync_groundtruth_depth_0000000005_image_02


junweifu avatar Sep 19 '19 09:09 junweifu

I think they seem normal.

JiaxiongQ avatar Sep 19 '19 11:09 JiaxiongQ

@JiaxiongQ OK, thanks~~~

junweifu avatar Sep 19 '19 12:09 junweifu