mzy97
Environment: PyTorch 1.5.1, CUDA 10.1. Testing on a small input tensor (2, 8, 5, 5): when using the test method in lib/sa/functions to test the speed, I found that the corresponding implementation using pytorch...
In the code below, b_x is the baseline from camera #i to camera #0. It is not related to this transformation, so why do you add it to the focal length? https://github.com/kuixu/kitti_object_vis/blob/d54807c70894c43f711870a99c062d43c55fd6e8/kitti_util.py#L294...
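A minimal sketch of what may be going on here, assuming the KITTI-style convention where the rectified projection matrix P for camera #i folds the stereo baseline into its last column as -f_x * b_x (the numeric values below are illustrative, not taken from a real calibration file):

```python
import numpy as np

# Illustrative KITTI-style intrinsics and baseline (assumed values).
fx, fy, cx, cy, bx = 721.5, 721.5, 609.6, 172.9, 0.54

# 3x4 projection matrix for camera #i: the baseline enters as -fx * bx,
# which is why the code multiplies b_x by the focal length.
P = np.array([[fx, 0.0, cx, -fx * bx],
              [0.0, fy, cy, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

# The baseline can be recovered from the matrix: b_x = -P[0,3] / P[0,0]
recovered_bx = -P[0, 3] / P[0, 0]

# Projecting a 3D point (camera-0 rectified coordinates, homogeneous)
# through P is then equivalent to shifting x by the baseline first.
X = np.array([2.0, 1.0, 10.0, 1.0])
uvw = P @ X
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
```

Under this convention the product f_x * b_x is not "adding the baseline to the focal length"; it is the translation term of K [I | t] with t = (-b_x, 0, 0), expressed in pixels.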
> Please see here for code: https://gist.github.com/ranftlr/1d6194db2e1dffa0a50c9b0a9549cbd2 > > We've never tried this loss without the ReLU at the end, it is possible that this influences the training dynamic. Hi,...
I want to train the model on my own dataset using your scale- and shift-invariant loss, but the ground-truth depth is sparse (about 40% of pixels are valid); will...
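One common way to handle sparse ground truth with this family of losses is to restrict both the closed-form scale/shift alignment and the residual to the valid pixels. The function below is a minimal sketch of that idea (the name `masked_ssi_loss` and the per-image least-squares formulation are my own illustration, not the authors' exact implementation):

```python
import torch

def masked_ssi_loss(pred, target, mask):
    """Sketch of a scale-and-shift-invariant loss with a validity mask.
    pred, target: (B, H, W) tensors; mask: (B, H, W), 1 where GT is valid.
    Only valid pixels enter the alignment and the residual."""
    mask = mask.float()
    # Closed-form least squares for per-image scale s and shift t:
    # minimize sum over masked pixels of (s * pred + t - target)^2.
    a00 = (mask * pred * pred).sum(dim=(1, 2))
    a01 = (mask * pred).sum(dim=(1, 2))
    a11 = mask.sum(dim=(1, 2))
    b0 = (mask * pred * target).sum(dim=(1, 2))
    b1 = (mask * target).sum(dim=(1, 2))
    det = a00 * a11 - a01 * a01
    valid = det > 0
    # Fall back to identity alignment when the system is degenerate.
    s = torch.where(valid, (a11 * b0 - a01 * b1) / det.clamp(min=1e-8),
                    torch.ones_like(det))
    t = torch.where(valid, (a00 * b1 - a01 * b0) / det.clamp(min=1e-8),
                    torch.zeros_like(det))
    aligned = s.view(-1, 1, 1) * pred + t.view(-1, 1, 1)
    # Mean squared residual over valid pixels only.
    res = mask * (aligned - target) ** 2
    return res.sum() / mask.sum().clamp(min=1.0)
```

With ~40% valid pixels the 2x2 system is still well determined as long as the masked predictions are not constant, so sparsity by itself should not break the alignment.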
On the dataset page, it says: LIDAR: 2 x SICK LMS-151 2D LIDAR, 270° FoV, 50 Hz, 50 m range, 0.5° resolution; 1 x SICK LD-MRS 3D LIDAR, 85° HFoV, 3.2° VFoV,...
Thank you for the great work. Where can I find the supplementary material?
In your paper, it says: evaluate the model by exhaustively choosing all points in the 3 m × 1.5 m × 1.5 m cube in a sliding-window fashion through the xy-plane with...
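For reference, a sliding-window sweep of a fixed-size cube over the xy-plane can be sketched as below. This is my own illustration of the described evaluation scheme, not the paper's code; the function name, the stride value, and the choice to leave out z-filtering (assuming all points fit the cube height) are assumptions:

```python
import numpy as np

def sliding_cube_windows(points, cube=(3.0, 1.5, 1.5), stride=1.0):
    """Slide a cube of (x, y, z) extent `cube` over the xy-plane and
    yield (x0, y0, indices) for each non-empty window.
    points: (N, 3) array. The z extent cz is not enforced here: this
    sketch assumes all points fit within the cube height."""
    cx, cy, cz = cube
    x_min, y_min = points[:, 0].min(), points[:, 1].min()
    x_max, y_max = points[:, 0].max(), points[:, 1].max()
    windows = []
    x0 = x_min
    while x0 <= x_max:
        y0 = y_min
        while y0 <= y_max:
            # Select points whose x, y fall inside the current window.
            in_cube = ((points[:, 0] >= x0) & (points[:, 0] < x0 + cx) &
                       (points[:, 1] >= y0) & (points[:, 1] < y0 + cy))
            idx = np.nonzero(in_cube)[0]
            if idx.size:
                windows.append((x0, y0, idx))
            y0 += stride
        x0 += stride
    return windows
```

Each window's index array can then be fed to the model, matching the "exhaustively choose all points in the cube" description.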
Thank you for sharing this wonderful work. When I go to the NYU repo, I can't find the NYU pretrained weights. Can you provide them?
Hi, I have two questions about processing depth: 1. In the script above, how is _RAW_TEST_TIMESTAMPS generated, and what is chunk used for? 2. How should _TIME_SPAN be chosen, when the...
In issue https://github.com/TRI-ML/packnet-sfm/issues/163, you said PackNet-SAN uses PackNet Slim as the RGB encoder-decoder, but when comparing PackNetSlim01.py and PackNetSAN01.py, I found that there are some changes in PackNetSAN01.py that PackNetSAN01...