imagefusion-LRRNet
LRRNet (IEEE TPAMI 2023), Python 3.7, PyTorch >= 1.8
LRRNet: A Novel Representation Learning Guided Fusion Framework for Infrared and Visible Images
Accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), DOI: 10.1109/TPAMI.2023.3268209
Hui Li, Tianyang Xu, Xiao-Jun Wu*, Jiwen Lu, Josef Kittler
Paper, arXiv, Supplemental materials 1, Supplemental materials 2

Platform
Python 3.7
PyTorch >= 1.8
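
A quick way to confirm the local environment meets these requirements (a minimal sketch; only the Python and PyTorch version thresholds come from this README):

```python
# Minimal environment check against the versions listed above.
import sys
import torch

assert sys.version_info >= (3, 7), "Python 3.7 or newer is expected"
major, minor = (int(v) for v in torch.__version__.split("+")[0].split(".")[:2])
assert (major, minor) >= (1, 8), "PyTorch 1.8 or newer is expected"
print(f"Python {sys.version.split()[0]}, PyTorch {torch.__version__}, "
      f"CUDA available: {torch.cuda.is_available()}")
```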
Training Dataset
The KAIST dataset (S. Hwang, J. Park, N. Kim, Y. Choi, I. So Kweon, "Multispectral pedestrian detection: Benchmark dataset and baseline," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1037–1045) is used to train LRRNet.
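
For orientation, the sketch below shows one way to feed paired infrared/visible images to a PyTorch training loop. The directory layout (matching file names under hypothetical "ir/" and "vis/" folders), the grayscale conversion, and the 128×128 resize are illustrative assumptions, not this repo's actual preprocessing of KAIST.

```python
# Sketch of a paired IR/visible dataset; the "ir/" and "vis/" layout is assumed.
import os

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class IRVisPairs(Dataset):
    def __init__(self, root, size=128):
        self.ir_dir = os.path.join(root, "ir")
        self.vis_dir = os.path.join(root, "vis")
        self.names = sorted(os.listdir(self.ir_dir))
        self.size = size

    def __len__(self):
        return len(self.names)

    def _load(self, path):
        # Grayscale, resized, scaled to [0, 1], shape (1, H, W).
        img = Image.open(path).convert("L").resize((self.size, self.size))
        return torch.from_numpy(np.array(img)).float().unsqueeze(0) / 255.0

    def __getitem__(self, idx):
        name = self.names[idx]
        ir = self._load(os.path.join(self.ir_dir, name))
        vis = self._load(os.path.join(self.vis_dir, name))
        return ir, vis
```

A standard DataLoader over this dataset then yields (ir, vis) batches of shape (B, 1, H, W).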
VGG-16 model
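
The pretrained VGG-16 model is typically used as a fixed feature extractor for feature-level loss terms during training. Below is a minimal sketch built on torchvision's VGG-16; the specific tapped layers (relu1_2 through relu4_3) and the single-channel-to-RGB replication are assumptions for illustration, not necessarily LRRNet's exact configuration.

```python
# Frozen VGG-16 feature extractor for feature-level loss terms (sketch).
import torch.nn as nn
from torchvision.models import vgg16


class VGGFeatures(nn.Module):
    # Indices 3, 8, 15, 22 correspond to relu1_2, relu2_2, relu3_3, relu4_3.
    def __init__(self, layers=(3, 8, 15, 22)):
        super().__init__()
        # Newer torchvision prefers vgg16(weights="IMAGENET1K_V1").
        backbone = vgg16(pretrained=True).features.eval()
        for p in backbone.parameters():
            p.requires_grad_(False)
        self.backbone = backbone
        self.layers = set(layers)

    def forward(self, x):
        # Fusion inputs are single-channel; replicate to 3 channels for VGG.
        if x.shape[1] == 1:
            x = x.repeat(1, 3, 1, 1)
        feats = []
        for i, layer in enumerate(self.backbone):
            x = layer(x)
            if i in self.layers:
                feats.append(x)
        return feats
```

Feature maps of the fused image can then be compared against those of the source images with, for example, an MSE term.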
The difference between LRRNet and other architectures

Learnable LRR block
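
As a rough illustration of the idea behind a learnable LRR block, the sketch below unfolds a few update steps that split an input feature map into a low-rank-like component and a salient component, with each update implemented as a learned convolution. The update form, step count, and channel sizes are assumptions for illustration; the paper defines the actual unfolded LRR updates.

```python
# Illustrative learnable LRR-style block: decompose a feature map X into a
# low-rank-like part Z and a salient part S via unfolded, learned updates.
# This is an approximation of the idea, not the paper's exact block.
import torch
import torch.nn as nn


class LearnableLRRBlock(nn.Module):
    def __init__(self, channels, steps=4):
        super().__init__()
        self.steps = steps
        # One pair of update convolutions per unfolded iteration (assumption).
        self.update_z = nn.ModuleList(
            nn.Conv2d(2 * channels, channels, 3, padding=1) for _ in range(steps))
        self.update_s = nn.ModuleList(
            nn.Conv2d(2 * channels, channels, 3, padding=1) for _ in range(steps))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        z = torch.zeros_like(x)  # low-rank-like component
        s = torch.zeros_like(x)  # salient / sparse-like component
        for k in range(self.steps):
            z = self.act(self.update_z[k](torch.cat([x - s, z], dim=1)))
            s = self.act(self.update_s[k](torch.cat([x - z, s], dim=1)))
        return z, s
```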

LRRNet - Fusion framework
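
To convey the decompose-fuse-reconstruct flow, the sketch below (reusing the LearnableLRRBlock sketch above) decomposes each source image, fuses the corresponding components by simple addition, and decodes the result. The encoder/decoder layout and the additive fusion rule are assumptions; the actual LRRNet architecture and fusion strategy differ.

```python
# Illustrative decompose-fuse-reconstruct pipeline (not the actual LRRNet).
import torch
import torch.nn as nn


class SimpleFusionNet(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.lrr = LearnableLRRBlock(channels)  # sketch block defined above
        self.decode = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, ir, vis):
        # ir, vis: (B, 1, H, W) tensors in [0, 1].
        z_ir, s_ir = self.lrr(self.encode(ir))
        z_vis, s_vis = self.lrr(self.encode(vis))
        fused = torch.cat([z_ir + z_vis, s_ir + s_vis], dim=1)
        return self.decode(fused)
```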

LLRR block for RGBT tracking - framework

If you have any questions about this code, feel free to contact me ([email protected], [email protected]).
Citation
@article{li2023lrrnet,
  title={{LRRNet: A novel representation learning guided fusion framework for infrared and visible images}},
  author={Li, Hui and Xu, Tianyang and Wu, Xiao-Jun and Lu, Jiwen and Kittler, Josef},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  volume={45},
  number={9},
  pages={11040--11052},
  year={2023},
  publisher={IEEE}
}