
iResNet

This repository contains the code (in Caffe) for the paper "Learning for Disparity Estimation through Feature Constancy" (CVPR 2018 and ROB 2018) by Zhengfa Liang et al.

Citation

@inproceedings{Liang2018Learning,
  title={Learning for Disparity Estimation through Feature Constancy},
  author={Liang, Zhengfa and Feng, Yiliu and Guo, Yulan and Liu, Hengzhu and Chen, Wei and Qiao, Linbo and Zhou, Li and Zhang, Jianfeng},
  booktitle={Computer Vision and Pattern Recognition},
  year={2018},
}

Contents

  1. Usage
  2. Pretrained Model
  3. Contact

Usage

Dependencies

Compile the Caffe code in this repository:

make clean
make all -j 12 tools

Notes:

  • The Caffe code in this repository is modified from DispNet, which includes the "Correlation1D" layer.

  • The FlowWarp layer is from FlowNet 2.0.

  • We add a RandomCrop layer and a DataSwitch layer.

  • RandomCrop is used to crop the bottom blob to a desired width and height; the channel number of this layer is fixed to 7 (left image, right image, and disparity). If the desired width or height is larger than that of the bottom blob, we pad the first 6 channels with the value 128 and the last channel with NaN. An example layer definition:

layer {
  name: "Random_crop_kitti2015"
  type: "RandomCrop"
  bottom: "kitti2015_data"
  top: "kitti2015_cropped_data"
  random_crop_param {
    target_height: 350
    target_width: 694
  }
}
  • DataSwitch is used to randomly select one of the input bottom blobs as the output, for example:
layer {
  name: "Random_select_datasets"
  type: "DataSwitch"
  bottom: "MiddleBury_cropped_data"
  bottom: "kitti2015_cropped_data"
  bottom: "eth3d_cropped_data"
  top: "curr_data"
}

Data preparation

Download the datasets following the instructions at http://www.cvlibs.net:3000/ageiger/rob_devkit. Put the folder "datasets_middlebury2014" under "CAFFE_ROOT/data". The file structure looks like:

├── CAFFE_ROOT
│   ├── data
│   │   └── datasets_middlebury2014
│   │       ├── metadata
│   │       ├── test
│   │       └── training

For the Scene Flow dataset, we only use the FlyingThings3D subset. Please download the RGB cleanpass images and the corresponding disparity maps. The file structure looks like:

├── CAFFE_ROOT
│   ├── data
│   │   └── FlyingThings3D_release
│   │       ├── disparity
│   │       └── frames_cleanpass

Training

  1. Enter the folder "CAFFE_ROOT/data" and use MATLAB to run the script "reshape_dataset.m" (one way to launch it from a terminal is sketched below).
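
If you prefer to run the script without the MATLAB GUI, a generic command-line invocation like the following should work (this is standard MATLAB usage, not a script provided by this repository; adjust CAFFE_ROOT to your actual path):

  cd CAFFE_ROOT/data
  matlab -nodisplay -nosplash -r "run('reshape_dataset.m'); exit;"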

  2. Open a terminal, enter the folder "CAFFE_ROOT/data", and run the script "make_lmdbs.sh" (replace CAFFE_ROOT in the script with your actual path first):

sh ./make_lmdbs.sh

Note that if a folder xxxx_lmdb already exists, you should delete it first so that the LMDBs are created correctly.
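
A minimal example, where xxxx_lmdb stands for whichever LMDB folder you are regenerating (it is a placeholder, not a literal folder name):

  rm -rf ./xxxx_lmdb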

  3. Enter the folder "CAFFE_ROOT/models/ROB_training" and replace CAFFE_ROOT in the xxxx.prototxt files under "ROB_training" with your actual path (a sed sketch for this follows below). Then run:
python ../train_rob.py 2>&1 | tee rob.log
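
To update every prototxt file in one pass, a sed one-liner along these lines should work (a sketch assuming GNU sed, run from inside the "ROB_training" folder; /path/to/caffe is a placeholder for your actual CAFFE_ROOT):

  sed -i 's|CAFFE_ROOT|/path/to/caffe|g' *.prototxt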

Evaluation

Download the pretrained model (see the Pretrained Model section below) and place it in the folder CAFFE_ROOT/models/model. You need to modify CAFFE_ROOT at line 15 in the file "test_rob.py". The results for submission will be stored at CAFFE_ROOT/models/submission_results.
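
If you prefer to make that edit from the shell, a sed command like the following should work (a sketch assuming GNU sed, that line 15 contains the literal string CAFFE_ROOT, and that /path/to/caffe is your actual path; run it from CAFFE_ROOT):

  sed -i '15s|CAFFE_ROOT|/path/to/caffe|' models/test_rob.py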

  cd models
  python test_rob.py model/iResNet_ROB.caffemodel

Pretrained Model

CVPR 2018

  • Scene Flow: Baiduyun
  • Starting point for fine-tuning KITTI: Baiduyun
  • KITTI 2015: Baiduyun

ROB 2018

  • Scene Flow: Baiduyun
  • Final model: Baiduyun

Contact

[email protected]