sc_depth_pl
Is the dataset provided in the readme a toy dataset? It only contains 15k frames.
It is the full dataset. I did experiments on the current data, and you can reproduce the results on it.
Thanks for your quick reply!
I have successfully reproduced your results as listed in the README. There is a slight difference from the paper, but it doesn't matter much.
I ran test.py with [nyu_scv2_model] and found that the depth prediction at the bottom of the image is anomalous. Is this the reason why the performance is not the same as reported in the paper?
#--------------------------------------------------------------------------------------------------------------------------------------------------------------#
By the way, can you give some suggestions on how to train sc_depth.py with the dataset provided in this repo?
I have successfully reproduced the results of kitti_scv1_model, but failed to train an Ours-DP version. I tried making some changes, but the model still doesn't converge.
#--------------------------------------------------------------------------------------------------------------------------------------------------------------#
One more question: this dataset (described as 'directly on original data with ARN') is actually smaller than what you provided earlier (the pre-processed 67K rectified pairs). Shouldn't it be bigger than that?
[the bottom of the image is anomalous] This is an issue with the NYU dataset images: every image has a white border.
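If it helps, a common workaround is to ignore the border region entirely during evaluation. Below is a minimal sketch (not code from this repo) that masks out the border using crop boundaries commonly applied to 480x640 NYU frames; the crop values and helper names here are assumptions, so adjust them to match your setup.

    import numpy as np

    def nyu_eval_mask(height=480, width=640):
        """Boolean mask that is True only inside the commonly used NYU evaluation crop."""
        mask = np.zeros((height, width), dtype=bool)
        # Assumed crop: rows 45..470 and cols 41..600 (the usual Eigen-style crop),
        # which excludes the white border around each NYU frame.
        mask[45:471, 41:601] = True
        return mask

    def abs_rel_error(pred, gt):
        """Abs-rel depth error computed only on valid pixels inside the crop."""
        mask = nyu_eval_mask(*gt.shape) & (gt > 0)
        return np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask])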
[but failed to train an Ours-DP version] Indeed, you do not need to do that, because the ARN version is better than DP. DP was only used to verify the idea, and it is not an elegant method. If you want to propose new methods based on this paper, you can simply work on the ARN version.
[pre-processed 67K rectified pairs] It should not be. The 67K pairs are constructed by pairing each image with its 20 neighboring images (some pairs are deleted). It is just a data preprocessing step. You can train directly on the original video data by using ARN and achieve similar results.
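To illustrate why the pair count can exceed the raw frame count, here is a hypothetical sketch of that kind of neighbor pairing (the function name and window handling are my own, not the authors' preprocessing script): each frame contributes up to 20 pairs, and unsuitable pairs are then filtered out.

    def make_neighbor_pairs(num_frames, num_neighbors=20):
        """Pair each frame index with up to `num_neighbors` surrounding frames."""
        half = num_neighbors // 2
        pairs = []
        for i in range(num_frames):
            for j in range(i - half, i + half + 1):
                if j != i and 0 <= j < num_frames:
                    pairs.append((i, j))
        return pairs

    # A 15k-frame set yields roughly 300k candidate pairs before any filtering,
    # so keeping 67K pairs after deletion is still far more than the raw frame count.
    print(len(make_neighbor_pairs(15000)))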
Hello! I ran into the same problem when running test.py with nyu_scv2_model: I couldn't get the same results as in the paper. Do you know how to resolve it? Thanks!