Lahav Lipson
We did not evaluate our method on a held-out partition of the MPI Sintel Stereo dataset. What is your training setup (e.g. hyperparameters, training/validation split)?
I get EPE: 17.3603, % of EPE > 3px: 13.0238 using 32 disparity updates. I'm not sure what the difference is between **flow_gt** and **disp** in your code.
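For reference, the two numbers quoted above (EPE and % of EPE > 3px) are usually computed like this. A minimal numpy sketch; `epe_metrics` is a hypothetical helper for illustration, not part of the repo:

```python
import numpy as np

def epe_metrics(disp_pred, disp_gt, thresh=3.0):
    """Mean end-point error and percentage of pixels with error > thresh."""
    err = np.abs(disp_pred - disp_gt)   # per-pixel absolute disparity error
    return err.mean(), (err > thresh).mean() * 100.0
```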
It should, yes. You may need to rotate/flip the images so that the disparity is always negative, which is how our model was trained.
If it would be helpful, I can add a google colab notebook for RAFT-Stereo.
You could certainly do that. It might reduce flickering and/or speed up inference when all image pairs come from the same binocular video. We didn't add this functionality...
You can use `--train_iters` and `--valid_iters` to set the number of GRU iterations.
This should answer your question: https://github.com/princeton-vl/RAFT/issues/24
In demo.py, you can use the `--save_numpy` flag to save the output to a file. Is this helpful?
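Reading the saved output back is just a `np.load`. A self-contained sketch (the array here is simulated; in practice the `.npy` file comes from `demo.py --save_numpy`, and the exact filename/path depends on your output directory and input image names):

```python
import numpy as np
import os, tempfile

# Simulate a saved prediction so this snippet runs on its own;
# real files are written by demo.py when --save_numpy is passed.
fake_disp = (np.random.rand(480, 640).astype(np.float32)) * -100.0  # model outputs negative disparities
path = os.path.join(tempfile.mkdtemp(), "left.npy")  # assumed path for illustration
np.save(path, fake_disp)

disp = np.load(path)  # load the saved disparity map
print(disp.shape, disp.dtype)
```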
I observed this as well, especially near object boundaries which end up getting smoothed by the network. The results look a lot cleaner if you filter the points using forward-backward...
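A left-right (forward-backward) consistency filter can be sketched as follows. This is a generic numpy implementation, not code from the repo, and it assumes positive disparity maps from both views (take the absolute value first if your predictions are negative):

```python
import numpy as np

def lr_consistency_mask(disp_left, disp_right, thresh=1.0):
    """Keep pixels where left and right disparities agree within `thresh` px.

    disp_left, disp_right: (H, W) positive disparity maps.
    A left pixel at column x maps to column x - d in the right image;
    the right-image disparity there should match d.
    """
    H, W = disp_left.shape
    cols = np.broadcast_to(np.arange(W), (H, W))
    # Column each left pixel lands on in the right image, clipped to bounds
    right_cols = np.clip(np.round(cols - disp_left).astype(int), 0, W - 1)
    d_right = disp_right[np.arange(H)[:, None], right_cols]
    return np.abs(disp_left - d_right) < thresh
```

Points where the mask is False (typically occlusions and smoothed boundaries) can then be dropped before building the point cloud.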
Any network that uses an endpoint error loss for training will probably smooth over some object boundaries, so I'm not sure. Also the point cloud may look better (but sparser)...