Zach Teed
The 1/5000 and 1/1000 factors come from the fact that depth maps for ScanNet and NYU are saved as 16-bit PNG images rather than floating-point values. You need to multiply by...
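A minimal sketch of what this looks like in practice (the file paths and helper name here are placeholders, not code from the repo): load the 16-bit PNG and divide by the dataset's scale factor to get depth in metres.

```python
import numpy as np
from PIL import Image

def load_depth(path, scale):
    """Load a 16-bit depth PNG and convert to float32 metres.

    `scale` is the dataset scale factor: 5000 for ScanNet, 1000 for NYU.
    """
    depth_png = np.array(Image.open(path), dtype=np.uint16)
    return depth_png.astype(np.float32) / scale

# depth = load_depth("scene0000_00/depth/0.png", scale=5000.0)  # ScanNet
# depth = load_depth("nyu/depth/0.png", scale=1000.0)           # NYU
```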
This issue seemed to be caused by the PyTorch update. I've updated the environment.yaml file, which I believe solves the issue.
Do you have a GPU? If you don't, you might be able to run on a CPU by changing `DEVICE = 'cuda'` to `DEVICE = 'cpu'` (line 18) in the...
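A slightly more defensive variant of that edit (a sketch, not the repo's own code) picks the device at runtime, so the same script works on machines with or without a GPU:

```python
import torch

# Fall back to CPU automatically when CUDA is unavailable.
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
```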
Hi, you need to run with the `--small` flag, for example `python demo.py --model=models/raft-small.pth --path=demo-frames --small`
Sure, I will add a demo for online tracking and mapping later this week. The demo code already performs simultaneous pose estimation and mapping, just over a small video...
Hi, I just added a new demo showing how DeepV2D can be used as a SLAM system on NYU.
Hi, you should be able to recover the absolute scale of translation on the KITTI dataset. You may need to scale the outputs by 10, because the output units on...
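As a rough illustration of the factor-of-10 rescaling (assuming the predictions are 4x4 camera poses; the function name is hypothetical), only the translation column needs scaling, the rotation block is already unitless:

```python
import numpy as np

def rescale_poses(poses, scale=10.0):
    """Multiply the translation part of (N, 4, 4) pose matrices by `scale`.

    Only the translation column is scaled; the rotation block is left
    untouched since it carries no units.
    """
    poses = poses.copy()
    poses[:, :3, 3] *= scale
    return poses
```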
Hi, currently it only estimates the focal length and not the distortion parameters. You will probably need to estimate the distortion parameters with some other method.
I've tested with cuda 10.2 and 11.1, but I have not tried with 10.1. Could you post the error message you are getting when trying to compile?
Hi, we unroll a single step during training (1 motion update and 1 depth update). This is end-to-end in the sense that we can backpropagate the gradient on the depth...
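An illustrative toy sketch of the single-step unroll (these tiny linear layers are stand-ins, not the actual DeepV2D networks): one motion update, one depth update, then the depth loss backpropagates through both modules.

```python
import torch

# Toy stand-ins for the motion and depth modules; the real networks are
# far larger. This only illustrates the single unrolled training step.
motion_net = torch.nn.Linear(8, 8)
depth_net = torch.nn.Linear(8, 8)

depth = torch.randn(1, 8)        # initial depth estimate
pose = torch.randn(1, 8)         # initial motion estimate

pose = pose + motion_net(depth)  # 1 motion update
depth = depth + depth_net(pose)  # 1 depth update

loss = depth.pow(2).mean()       # supervise the final depth
loss.backward()                  # gradient reaches the motion module too
```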