
HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks (IEEE Transactions on Image Processing, 2024)


This is the official repository of our IEEE TIP paper HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks by Burak Ercan, Onur Eker, Canberk Sağlam, Aykut Erdem, and Erkut Erdem.


In this work, we present HyperE2VID, a dynamic neural network architecture for event-based video reconstruction. Our approach extends existing static architectures with hypernetworks and dynamic convolutions, generating per-pixel adaptive filters guided by a context fusion module that combines information from event voxel grids and previously reconstructed intensity images. We show that this dynamic architecture generates higher-quality videos than the previous state of the art while also reducing memory consumption and inference time.

Overview of our proposed HyperE2VID architecture
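At the core of this approach is predicting convolutional filters at inference time, conditioned on the current input, rather than fixing them at training time. The snippet below is a minimal PyTorch sketch of that idea, not the authors' implementation (the actual model code is under the model folder); the module names `PerPixelDynamicConv` and `ContextFusion`, and simplifications such as sharing one filter across channels, are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of per-pixel dynamic
# convolution guided by a fused context. All names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerPixelDynamicConv(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        # Hypernetwork: maps the fused context to one k*k filter per pixel
        # (shared across channels here to keep the sketch small).
        self.hypernet = nn.Conv2d(channels, kernel_size * kernel_size, 1)

    def forward(self, features: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        b, c, h, w = features.shape
        k = self.kernel_size
        # Predict a per-pixel k*k filter from the context and normalize it.
        filters = F.softmax(self.hypernet(context), dim=1)   # (B, k*k, H, W)
        # Unfold local k*k patches of the feature map ...
        patches = F.unfold(features, k, padding=k // 2)      # (B, C*k*k, H*W)
        patches = patches.view(b, c, k * k, h, w)
        # ... and take the filter-weighted sum at every spatial location.
        return (patches * filters.unsqueeze(1)).sum(dim=2)

class ContextFusion(nn.Module):
    """Illustrative stand-in for the context fusion module: concatenate
    event features and previous-reconstruction features, mix with 1x1 conv."""
    def __init__(self, event_ch: int, image_ch: int, out_ch: int):
        super().__init__()
        self.mix = nn.Conv2d(event_ch + image_ch, out_ch, 1)

    def forward(self, event_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        return self.mix(torch.cat([event_feat, image_feat], dim=1))

# Toy usage with random tensors standing in for encoder outputs.
events = torch.randn(1, 32, 64, 64)   # encoded event voxel-grid features
prev   = torch.randn(1, 32, 64, 64)   # features of the previous reconstruction
ctx = ContextFusion(32, 32, 32)(events, prev)
out = PerPixelDynamicConv(32)(events, ctx)   # (1, 32, 64, 64)
```

Softmax-normalizing the predicted filter taps is one common way to keep per-pixel filtering numerically stable; refer to the paper and the model folder for the design actually used.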

  • Our HyperE2VID paper has been accepted by IEEE Transactions on Image Processing.
  • For more details please see our paper.
  • For qualitative results please see our project website.
  • For more results and experimental analyses of HyperE2VID, please see the interactive result analysis tool of EVREAL.
  • Model code is published under the model folder in this repository.
  • The pretrained model of HyperE2VID can be found here (a checkpoint-inspection sketch follows this list).
  • For evaluation and analysis of the HyperE2VID model, please use the code in the EVREAL repository.
  • Instructions for generating training data can be found in the datagen folder.
  • Training code will be published soon.
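As a quick sanity check before plugging the pretrained model into EVREAL, you can inspect the downloaded checkpoint. This is a hypothetical sketch that assumes a standard PyTorch checkpoint file; the file name hypere2vid.pth is a placeholder, not the actual download name.

```python
# Hypothetical sketch: inspect the downloaded pretrained checkpoint.
# "hypere2vid.pth" is a placeholder file name.
import torch

checkpoint = torch.load("hypere2vid.pth", map_location="cpu")
# Checkpoints saved with torch.save are typically dicts holding a
# state_dict and possibly training metadata; list the top-level keys.
if isinstance(checkpoint, dict):
    for key in checkpoint:
        print(key)
```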

Citations

If you use the code in this repository in an academic context, please cite the following:

@article{ercan2024hypere2vid,
  title={{HyperE2VID}: Improving Event-Based Video Reconstruction via Hypernetworks},
  author={Ercan, Burak and Eker, Onur and Saglam, Canberk and Erdem, Aykut and Erdem, Erkut},
  journal={IEEE Transactions on Image Processing},
  year={2024},
  volume={33},
  pages={1826--1837},
  doi={10.1109/TIP.2024.3372460},
  publisher={IEEE}
}

Acknowledgements

  • This work was supported in part by the KUIS AI Center Research Award, the TUBITAK-1001 Program (Award No. 121E454), and the BAGEP 2021 Award of the Science Academy to A. Erdem.
  • This code borrows from or is inspired by the following open-source repositories:
    • https://github.com/uzh-rpg/rpg_e2vid
    • https://github.com/TimoStoff/event_cnn_minimal
    • https://github.com/ZeWang95/ACDA