DeepV2D
Output of demo scripts
Hi, thanks for uploading the code for this research paper. I am able to run the demo code for NYU successfully; however, the output is a single depth image, and the same goes for the demo_uncalibrated script, where an entire video is provided as input. Shouldn't the output be multiple depth maps, one per video frame, or something similar to what is described in the paper?
If you run with --mode=global, depth maps will be predicted for all frames. You can also try the SLAM demos to predict depth over longer sequences.
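For anyone wanting to collect the per-frame outputs, here is a minimal sketch of how a stack of predicted depth maps could be written to disk. The variable name `depths`, its assumed shape `(num_frames, H, W)`, and the placeholder prediction call are assumptions for illustration, not DeepV2D's actual API:

```python
import os
import numpy as np
import cv2

def save_depth_maps(depths, output_dir="output"):
    """Write one 16-bit PNG per predicted frame.

    `depths` is assumed to be a NumPy array of shape
    (num_frames, H, W) holding depth in meters; the DeepV2D
    demos may expose the predictions in a different structure.
    """
    os.makedirs(output_dir, exist_ok=True)
    for i, depth in enumerate(depths):
        # Scale meters to millimeters and store as 16-bit PNG,
        # a common convention for depth images (e.g. NYU / TUM).
        depth_mm = (depth * 1000.0).astype(np.uint16)
        cv2.imwrite(os.path.join(output_dir, "depth_%04d.png" % i), depth_mm)

# Hypothetical usage after running a demo in global mode:
# depths = run_deepv2d(images, intrinsics)  # placeholder call
# save_depth_maps(depths)
```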
Hi @zachteed, thanks for the reply. I ran with --mode=global, but I couldn't find depth maps for all the frames in the output folder; only a single depth.png is created regardless of which mode I run. The same happens for uncalibrated videos. Also, is it possible to provide a stereo video as input? Where would all the output depth maps be stored?
Regards, Aakash
@zachteed Hi, I found that running the SLAM demo requires the camera intrinsics. Would it work without intrinsics, the way demo_uncalibrated.py does?
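As a side note, when true calibration is unavailable, one generic fallback is to approximate pinhole intrinsics from a guessed field of view. This is only an illustration of that approximation; the 60-degree default and the `[fx, fy, cx, cy]` vector layout are assumptions, not values or formats taken from DeepV2D:

```python
import numpy as np

def intrinsics_from_fov(width, height, fov_deg=60.0):
    """Build approximate pinhole intrinsics [fx, fy, cx, cy]
    from an assumed horizontal field of view.

    A rough stand-in for missing calibration; the default FOV
    is a guess, not a DeepV2D parameter.
    """
    fov = np.radians(fov_deg)
    fx = width / (2.0 * np.tan(fov / 2.0))  # focal length in pixels
    fy = fx                                 # assume square pixels
    cx, cy = width / 2.0, height / 2.0      # principal point at image center
    return np.array([fx, fy, cx, cy], dtype=np.float32)

# Example: rough intrinsics for a 640x480 video
print(intrinsics_from_fov(640, 480))
```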