
[IEEE TIP 2022] Code for EDN: Salient Object Detection via Extremely-Downsampled Network

EDN

IEEE TIP 2022, EDN: Salient Object Detection via Extremely-Downsampled Network

A Chinese translation of the paper is available for download: Chinese version

If you run into any problems or have any difficulty running this code, do not hesitate to open an issue in this repository.

My e-mail is: wuyuhuan @ mail.nankai (dot) edu.cn

:fire: News! We have updated the code with the P2T transformer backbone. It achieves much better results than the original EDN with ResNet-50! You can download the saliency maps and pretrained model from the GitHub release of this repository.

This repository contains:

  • [x] Full code, data for training and testing
  • [x] Pretrained models based on VGG16, ResNet-50, P2T-Small and MobileNetV2
  • [x] Fast preparation script (based on GitHub release)

Requirements

  • Python 3.6+
  • PyTorch >= 1.6, torchvision, OpenCV-Python, tqdm
  • Tested with PyTorch 1.7.1

Simply run:

pip install -r requirements.txt

to install all requirements.
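
After installation, a quick sanity check like the following (our suggestion, not part of the repository) confirms that the core dependencies are importable and whether CUDA is visible:

# Optional environment check: verifies the requirements listed above.
import torch
import torchvision
import cv2
import tqdm  # imported only to confirm it is installed

print("PyTorch:", torch.__version__)  # should be >= 1.6
print("torchvision:", torchvision.__version__)
print("OpenCV:", cv2.__version__)
print("CUDA available:", torch.cuda.is_available())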

Run all steps quickly

Simply run:

bash one-key-run.sh

It will download all data, evaluate all models, write all saliency maps to the salmaps/ folder, and train EDN-Lite automatically. Note that this script requires a good download speed from GitHub.

Data Preparation

You can use our automatic preparation script if you have a good download speed from GitHub:

bash scripts/prepare_data.sh

The script prepares the datasets, the ImageNet-pretrained models, and the pretrained EDN/EDN-Lite models. If you suffer from a slow download rate and have a proxy available, the powerful tool Proxychains4 can route the script through your proxy; simply run proxychains4 bash scripts/prepare_data.sh.

If your download speed is low, please download the training data manually:

The data are already fully processed, so you can use them without any preprocessing steps. After downloading, extract the archive into the ./data/ folder:

unzip SOD_datasets.zip -d ./data

Demo

We provide some examples for a quick run:

python demo.py
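
Under the hood, demo inference follows the standard pattern: load the network and pretrained weights, convert an image to a normalized tensor, and apply a sigmoid to the output. The sketch below is illustrative only; build_model and the file paths are placeholders for the repository's actual model constructor, checkpoint, and example images:

import cv2
import numpy as np
import torch

from model import build_model  # placeholder: use the repository's real constructor

net = build_model()
net.load_state_dict(torch.load("pretrained/EDN.pth", map_location="cpu"))
net.eval()

# Read an image, convert BGR -> RGB, scale to [0, 1], and reshape to NCHW.
# (A real pipeline typically also resizes and applies ImageNet mean/std.)
img = cv2.cvtColor(cv2.imread("examples/0001.jpg"), cv2.COLOR_BGR2RGB)
x = torch.from_numpy(img.astype(np.float32) / 255.0).permute(2, 0, 1).unsqueeze(0)

with torch.no_grad():
    pred = torch.sigmoid(net(x))  # saliency map with values in [0, 1]

cv2.imwrite("salmap.png", (pred.squeeze().numpy() * 255).astype(np.uint8))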

Train

If you cannot run bash scripts/prepare_data.sh, please first download the ImageNet-pretrained models and put them into the pretrained/ folder:

It is very simple to train our network. We have prepared a script to train EDN-Lite:

bash ./scripts/train.sh

To train EDN-VGG16 or EDN-R50, you need to change the parameters in scripts/train.sh. Please refer to the comments in the last part of scripts/train.sh for details (it is very simple).
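
Conceptually, each training iteration minimizes a pixel-wise loss between the predicted saliency map and the binary ground-truth mask. The following is a generic sketch of such a step, not the repository's actual trainer (which may combine several losses and deep supervision):

import torch
import torch.nn.functional as F

def train_step(net, optimizer, images, masks):
    # images: (N, 3, H, W) float tensor; masks: (N, 1, H, W) with values in {0, 1}
    optimizer.zero_grad()
    logits = net(images)  # raw, pre-sigmoid network output
    loss = F.binary_cross_entropy_with_logits(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()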

Test

Pretrained Models

Download them from the following URLs if you did not run bash scripts/prepare_data.sh to prepare the data:

Generate Saliency Maps

After preparing the pretrained models, generating saliency maps with EDN-VGG16/EDN-R50/EDN-Lite/EDN-LiteEX is just as simple:

bash ./tools/test.sh

The script will automatically write the saliency maps to the salmaps/ directory.

  • For computing Fbw, S-m, and E-m measures, please use the official MATLAB code to generate the results: Download Code Here.
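
For simpler metrics such as MAE, a plain Python pass over the generated maps is enough. The folder names below are placeholders; point them at one of your prediction folders and the matching ground-truth folder:

import os
import cv2
import numpy as np

pred_dir = "salmaps/ECSSD"  # placeholder: a folder of predicted saliency maps
gt_dir = "data/ECSSD/GT"    # placeholder: the matching ground-truth masks

maes = []
for name in sorted(os.listdir(gt_dir)):
    gt = cv2.imread(os.path.join(gt_dir, name), cv2.IMREAD_GRAYSCALE) / 255.0
    pred = cv2.imread(os.path.join(pred_dir, name), cv2.IMREAD_GRAYSCALE) / 255.0
    if pred.shape != gt.shape:  # predictions may be saved at a different size
        pred = cv2.resize(pred, gt.shape[::-1])
    maes.append(float(np.abs(pred - gt).mean()))

print("MAE:", np.mean(maes))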

Pre-computed Saliency Maps

For convenience, we provide the pre-computed saliency maps on several datasets. You can obtain them by:

  • Running the command bash scripts/prepare_salmaps.sh to download them to salmaps folder.
  • Or downloading them manually: [Google Drive], [Baidu Pan, c9zm]
  • Now we have included the saliency maps of all EDN variants, including EDN-VGG16, EDN-ResNet-50, EDN-P2T-Small, EDN-Lite, and EDN-LiteEX.

Others

TODO

Contact

  • Feel free to contact me via e-mail: wuyuhuan @ mail.nankai (dot) edu.cn

License

The code is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License for NonCommercial use only.

Citation

If you are using the code/model/data provided here in a publication, please consider citing our works:

@ARTICLE{wu2022edn,
  title={EDN: Salient object detection via extremely-downsampled network},
  author={Wu, Yu-Huan and Liu, Yun and Zhang, Le and Cheng, Ming-Ming and Ren, Bo},
  journal={IEEE Transactions on Image Processing},
  year={2022}
}

@ARTICLE{wu2021mobilesal,
  author={Wu, Yu-Huan and Liu, Yun and Xu, Jun and Bian, Jia-Wang and Gu, Yu-Chao and Cheng, Ming-Ming},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  title={MobileSal: Extremely Efficient RGB-D Salient Object Detection}, 
  year={2021},
  doi={10.1109/TPAMI.2021.3134684}
}