RAFT-Stereo

Question about the results on Middlebury

superxi opened this issue 2 years ago · 1 comment

Hi, thanks for open-sourcing such great work.

I tried evaluating on the MiddEval3 training set with your raftstereo-middlebury.pth model, but the results are worse than the scoreboard shows at https://vision.middlebury.edu/stereo/eval3/. How can I reproduce the precision reported on the website?

I used the default parameters in evaluate_stereo.py. The command used and the results are shown below. [screenshot attached]
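(For reference, a default-parameter evaluation of this checkpoint would be invoked roughly as below. The flag names `--restore_ckpt` and `--dataset middlebury_H` are assumptions about evaluate_stereo.py's interface, not a transcription of the screenshot; check the script's argument parser for the exact options.)

```Shell
# Hypothetical invocation -- flag names assumed, may differ from the actual script
python evaluate_stereo.py --restore_ckpt models/raftstereo-middlebury.pth --dataset middlebury_H
```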

superxi · Jul 29 '22 03:07

The results you generated are the same ones that were submitted to the Middlebury training-set scoreboard. The difference is in how they are evaluated: the official metric, I believe, prioritizes difficult image regions such as those "with fine detail and/or lack of texture."

See: https://vision.middlebury.edu/stereo/eval3/MiddEval3-newFeatures.html
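(To make that distinction concrete, here is a minimal sketch, not the official MiddEval3 evaluation code, contrasting a plain bad-pixel percentage with a difficulty-weighted variant of the same metric; the helper name and the weighting scheme are hypothetical.)

```python
import numpy as np

def bad_pixel_rate(disp_est, disp_gt, valid, thresh=2.0, weights=None):
    """Fraction of valid pixels whose disparity error exceeds `thresh` pixels.

    With `weights` set (e.g. larger values in regions with fine detail or
    little texture), errors in hard regions count for more, so the same
    disparity map can score noticeably worse than under a uniform average.
    """
    err = np.abs(disp_est - disp_gt)
    bad = (err > thresh) & valid
    if weights is None:
        return bad.sum() / valid.sum()   # plain, uniformly averaged bad-pixel rate
    w = weights * valid                  # zero out invalid pixels
    return (bad * w).sum() / w.sum()     # difficulty-weighted variant
```

Under uniform weights the two branches give the same number; once hard regions are up-weighted, identical predictions can receive a noticeably worse score.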

lahavlipson · Jul 29 '22 16:07