EfficientSOD
This research is based on efficient saliency detection using deformable convolutions.
Densely Deformable Efficient Salient Object Detection Network
Paper Link: https://arxiv.org/abs/2102.06407
Contents
- Introduction
- The proposed DDNet
- Requirements and how to run?
- Quantitative and qualitative comparisons
- Citation and acknowledgements
Introduction
In this paper, inspired by the strong background/foreground separation ability of deformable convolutions, we employ them in our Densely Deformable Network (DDNet) to achieve efficient SOD. The salient regions produced by the densely deformable convolutions are further refined with transposed convolutions to generate the final saliency maps. Quantitative and qualitative evaluations on a recent SOD dataset against 22 competing techniques demonstrate our method's efficiency and effectiveness.
The proposed DDNet
The proposed DDNet uses three main blocks to generate optimal saliency. First, two dense convolution blocks extract low-level features from the input RGB images. Next, densely connected deformable convolutions learn effective features of the salient regions and their corresponding boundaries. Finally, we employ transposed convolution and upsampling to generate the resulting saliency map; refer to the figure below:
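To illustrate the core idea behind the deformable convolutions used in DDNet, the following is a minimal single-channel sketch in NumPy (not the paper's implementation): each kernel tap samples the input at a learned fractional offset from its regular grid position, using bilinear interpolation. All function names here are illustrative.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Sample img (H, W) at fractional coordinates (y, x), zero padding outside."""
    H, W = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < H and 0 <= xx < W:
                # Bilinear weights shrink linearly with distance to the corner.
                val += (1 - abs(y - yy)) * (1 - abs(x - xx)) * img[yy, xx]
    return val

def deform_conv2d(img, weight, offsets):
    """Single-channel deformable 2D convolution (stride 1, zero padding).

    img:     (H, W) input
    weight:  (k, k) kernel
    offsets: (H, W, k*k, 2) learned (dy, dx) shift for every kernel tap
    """
    H, W = img.shape
    k = weight.shape[0]
    r = k // 2
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for a in range(k):
                for b in range(k):
                    dy, dx = offsets[i, j, a * k + b]
                    # Regular grid position plus the learned offset.
                    acc += weight[a, b] * bilinear_sample(
                        img, i + a - r + dy, j + b - r + dx)
            out[i, j] = acc
    return out
```

With all offsets set to zero this reduces to an ordinary convolution; non-zero offsets let each tap adapt its sampling location to object boundaries, which is what makes deformable convolutions effective at separating salient foreground from background. In practice one would use an optimized implementation such as `torchvision.ops.DeformConv2d` rather than these Python loops.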
Requirements and how to run?
For the trained models and high-resolution images, please visit: https://drive.google.com/drive/folders/1aigSE0nLKfYlAbl9CIk4mSpCxneS2fUw?usp=sharing
Create a folder named TrainedModels in the repository root and place the pretrained DDNet weights and model downloaded from the link above inside it.
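The setup step above can be done from the command line (assuming a Unix-like shell; the weights themselves must still be downloaded manually from the Google Drive link):

```shell
# From the repository root: create the folder the code expects.
mkdir -p TrainedModels
# Then move the downloaded DDNet weights/model files into TrainedModels/.
```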
Quantitative and qualitative comparisons
Citation and acknowledgements
@misc{hussain2021densely,
  title={Densely Deformable Efficient Salient Object Detection Network},
  author={Tanveer Hussain and Saeed Anwar and Amin Ullah and Khan Muhammad and Sung Wook Baik},
  year={2021},
  eprint={2102.06407},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}