Official implementation of "Stereo Depth from Events Cameras: Concentrate and Focus on the Future" (CVPR 2022)
SE-CFF
[S]tereo depth from [E]vents Cameras: [C]oncentrate and [F]ocus on the [F]uture
This is the official code repository for "Stereo Depth from Events Cameras: Concentrate and Focus on the Future", CVPR 2022, by Yeong-oo Nam*, Mohammad Mostafavi*, Kuk-Jin Yoon, and Jonghyun Choi (corresponding author).
If you use any of this code, please cite both of the following publications:
@inproceedings{nam2022stereo,
title = {Stereo Depth from Events Cameras: Concentrate and Focus on the Future},
author = {Nam, Yeongwoo and Mostafavi, Mohammad and Yoon, Kuk-Jin and Choi, Jonghyun},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2022}
}
@inproceedings{mostafavi2021event,
title = {Event-Intensity Stereo: Estimating Depth by the Best of Both Worlds},
author = {Mostafavi, Mohammad and Yoon, Kuk-Jin and Choi, Jonghyun},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages = {4258--4267},
year = {2021}
}
Maintainers
- Yeong-oo Nam
- Mohammad Mostafavi
Table of contents
- Pre-requisite
  - Hardware
  - Software
  - Dataset
- Getting started
- Training
- Inference
  - Pre-trained model
- What is not ready yet
- Benchmark website
- Related publications
- License
Pre-requisite
The following sections list the requirements for training and evaluating the model.
Hardware
Tested on:
- CPU - 2 x Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz
- RAM - 256 GB
- GPU - 8 x NVIDIA A100 (40 GB)
- SSD - Samsung MZ7LH3T8 (3.5 TB)
Software
Tested on:
Dataset
Download the DSEC dataset.
📂 Data structure
Our folder structure is as follows:
DSEC
├── train
│ ├── interlaken_00_c
│ │ ├── calibration
│ │ │ ├── cam_to_cam.yaml
│ │ │ └── cam_to_lidar.yaml
│ │ ├── disparity
│ │ │ ├── event
│ │ │ │ ├── 000000.png
│ │ │ │ ├── ...
│ │ │ │ └── 000536.png
│ │ │ └── timestamps.txt
│ │ └── events
│ │ ├── left
│ │ │ ├── events.h5
│ │ │ └── rectify_map.h5
│ │ └── right
│ │ ├── events.h5
│ │ └── rectify_map.h5
│ ├── ...
│ └── zurich_city_11_c # same structure as train/interlaken_00_c
└── test
├── interlaken_00_a
│ ├── calibration
│ │ ├── cam_to_cam.yaml
│ │ └── cam_to_lidar.yaml
│ ├── events
│ │ ├── left
│ │ │ ├── events.h5
│ │ │ └── rectify_map.h5
│ │ └── right
│ │ ├── events.h5
│ │ └── rectify_map.h5
│ └── interlaken_00_a.csv
├── ...
└── zurich_city_15_a # same structure as test/interlaken_00_a
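Each events.h5 and rectify_map.h5 file follows the HDF5 layout documented by DSEC. As a rough orientation, the sketch below loads a slice of events and rectifies their pixel coordinates; it assumes the documented DSEC keys (events/x, events/y, events/t, events/p and a rectify_map array of shape H x W x 2), and the file paths are only illustrative.
# Minimal sketch of reading DSEC events (assumes the documented DSEC HDF5 layout;
# paths are illustrative placeholders).
import h5py

events_path = "DSEC/train/interlaken_00_c/events/left/events.h5"
rectify_path = "DSEC/train/interlaken_00_c/events/left/rectify_map.h5"

with h5py.File(events_path, "r") as f:
    # Raw event streams: pixel coordinates, timestamps (microseconds), polarity.
    x = f["events/x"][:100000]
    y = f["events/y"][:100000]
    t = f["events/t"][:100000]
    p = f["events/p"][:100000]

with h5py.File(rectify_path, "r") as f:
    # Per-pixel map from raw sensor coordinates to rectified coordinates, shape (H, W, 2).
    rectify_map = f["rectify_map"][()]

# Rectify event coordinates by looking up the map at each raw (y, x) location.
xy_rect = rectify_map[y, x]
x_rect, y_rect = xy_rect[..., 0], xy_rect[..., 1]
print(x_rect.shape, y_rect.shape)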
Getting started
Build docker image
git clone [repo_path]
cd event-stereo
docker build -t event-stereo ./
Run docker container
docker run \
-v <PATH/TO/REPOSITORY>:/root/code \
-v <PATH/TO/DATA>:/root/data \
-it --gpus=all --ipc=host \
event-stereo
Build deformable convolution
cd /root/code/src/components/models/deform_conv && bash build.sh
Training
cd /root/code/scripts
bash distributed_main.sh
Inference
cd /root/code
python3 inference.py \
--data_root /root/data \
--checkpoint_path <PATH/TO/CHECKPOINT.PTH> \
--save_root <PATH/TO/SAVE/RESULTS>
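The inference script writes disparity predictions under --save_root. If the outputs follow the DSEC benchmark convention (16-bit PNGs where disparity = pixel_value / 256 and zero marks invalid pixels), they can be inspected with a short script like the sketch below; the file name is only a placeholder.
# Minimal sketch for inspecting a predicted disparity map, assuming the DSEC
# convention: uint16 PNG with disparity = pixel_value / 256 (0 = invalid).
import numpy as np
import imageio.v2 as imageio

pred_path = "<PATH/TO/SAVE/RESULTS>/000000.png"  # placeholder path and file name

disp_png = imageio.imread(pred_path)             # uint16 image
disparity = disp_png.astype(np.float32) / 256.0  # disparity in pixels
valid = disp_png > 0                             # zero typically marks invalid pixels

print("valid pixels:", valid.sum(), "max disparity:", disparity[valid].max())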
Pre-trained model
:gear: You can download the pre-trained model from here.
What is not ready yet
Some modules introduced in the paper are not ready yet. We will update them soon.
- Intensity image pre-processing code.
- E+I Model code.
- E+I train & test code.
- Future event distillation code.
Benchmark website
The DSEC website holds the benchmarks and competitions.
:rocket: Our CVPR 2022 results (this repo) are available on the DSEC website, where we rank above the state-of-the-art method from ICCV 2021.
:rocket: Our ICCV 2021 paper Event-Intensity Stereo: Estimating Depth by the Best of Both Worlds ranked first in the CVPR 2021 Competition hosted by the CVPR 2021 Workshop on Event-based Vision. See also the YouTube video from the competition.
Related publications
- Event-Intensity Stereo: Estimating Depth by the Best of Both Worlds - Openaccess ICCV 2021 (PDF)
- E2SRI: Learning to Super Resolve Intensity Images from Events - TPAMI 2021 (Link)
- Learning to Super Resolve Intensity Images from Events - Openaccess CVPR 2020 (PDF)
License
MIT license.