
Self-Supervised Learning for Fine-Grained Image Categorization

This repository contains the implementation of the project "Self-Supervised Learning for Fine-Grained Categorization". The project examines the effectiveness of various self-supervised learning (SSL) methods on a fine-grained visual categorization (FGVC) problem. Self-supervision is implemented as an auxiliary task trained jointly with a baseline FGVC model. Specifically, the repository provides implementations of rotation prediction [1], pretext-invariant representation learning (PIRL) [2], and destruction and construction learning (DCL) [3] as auxiliary tasks for the baseline model.
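To make the auxiliary-task setup concrete, here is a minimal PyTorch sketch of the rotation variant. It is an illustration, not the repository's actual code: the class name, the ResNet-50 backbone, and the ssl_weight parameter are assumptions made for this example.

# Illustrative sketch: rotation prediction as an auxiliary task next to the FGVC objective.
import torch
import torch.nn as nn
import torchvision.models as models

class BaselineWithRotationSSL(nn.Module):
    def __init__(self, num_classes: int, ssl_weight: float = 1.0):
        super().__init__()
        backbone = models.resnet50(pretrained=False)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                         # keep only the feature extractor
        self.backbone = backbone
        self.fgvc_head = nn.Linear(feat_dim, num_classes)   # main fine-grained classifier
        self.rot_head = nn.Linear(feat_dim, 4)              # auxiliary head: 0/90/180/270 degrees
        self.ssl_weight = ssl_weight
        self.criterion = nn.CrossEntropyLoss()

    def forward(self, images, labels):
        # Rotate each image by a random multiple of 90 degrees; that multiple is
        # the target for the auxiliary rotation-prediction task.
        rot_labels = torch.randint(0, 4, (images.size(0),), device=images.device)
        rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                               for img, k in zip(images, rot_labels)])
        loss_fgvc = self.criterion(self.fgvc_head(self.backbone(images)), labels)
        loss_rot = self.criterion(self.rot_head(self.backbone(rotated)), rot_labels)
        return loss_fgvc + self.ssl_weight * loss_rot

The auxiliary loss acts on the shared backbone, which is the motivation for pairing SSL with the FGVC objective: the backbone must retain the structural information needed to solve the pretext task.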

[Image: sample class activation map (CAM) visualization]

Available Models

The list of implemented model architectures can be found here.

Pipeline Configuration

All the functionality of this repository is accessed through a .yml configuration file. Details of the configuration parameters can be found here.

We also provide sample configuration files at ./config/* for each implemented method, as listed below (a hypothetical config sketch follows the list).

  1. Baseline Config
  2. SSL Rotation Config
  3. SSL PIRL Config
  4. SSL DCL Config
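For orientation, the sketch below shows the general shape such a config file might take. Every key and value here is a made-up assumption for illustration; the real parameter names are defined in the configuration documentation and the sample files listed above.

# Hypothetical pipeline config sketch -- consult ./config/*.yml for the real keys.
model:
  name: resnet50            # backbone architecture
  num_classes: 200          # e.g., CUB-200-2011 has 200 bird categories
ssl_task:
  type: rotation            # auxiliary task: rotation, pirl, or dcl
  loss_weight: 1.0          # weight of the auxiliary loss term
train:
  epochs: 100
  batch_size: 16
  learning_rate: 0.001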

Dependencies

  • An Ubuntu-based machine with an NVIDIA GPU is required to run the training and evaluation. The code was developed on a machine running Ubuntu 18.04 LTS with a single 24 GB Quadro RTX 6000 GPU.
  • Python 3.8.
  • PyTorch 1.7.1 and the corresponding torchvision version.

Installation

It is recommended to create a new conda environment for this project. The installation steps are as follows:

  1. Create a new conda environment and activate it:
$ conda create --name=ssl_for_fgvc python=3.8
$ conda activate ssl_for_fgvc
  2. Install the requirements:
$ pip install -r requirements.txt

Evaluating Pretrained Models

All the pretrained models can be found at click_me. To evaluate a model, download its checkpoints from the link and use the scripts/evaluate.py script to evaluate it on the test set.

$ cd scripts
$ python evaluate.py --config_path=<path to the corresponding configuration '.yml' file> \
--model_checkpoints=<path to the downloaded model checkpoints> \
--root_dataset_path=<path to the dataset root directory>

If the --root_dataset_path command-line parameter is not provided to the evaluate.py script, it will download the dataset automatically and then perform the testing. The download may take some time depending on network speed and stability. For more information, run:

$ python evaluate.py --help

Sample Input and Expected Output

For example, to evaluate the DCL model, download the corresponding checkpoints (say, into the scripts directory as ssl_dcl/best_checkpoints.pth) and run the following commands.

$ cd scripts
$ python evaluate.py --config_path=../config/ssl_dcl.yml --model_checkpoints=./ssl_dcl/best_checkpoints.pth

The expected outputs after running the command are given below.

[Image: evaluation outputs for the DCL model]

Training Models from Scratch

The end-to-end training functionality can be accessed using the main.py script. The script takes a pipeline config (.yml) file as a command-line parameter and initiates the corresponding training.

$ python main.py --config_path=<path to the corresponding configuration '.yml' file>

For more information, run:

$ python main.py --help

Sample Input and Expected Output

For example, to train a DCL model run,

$ python main.py --config_path=./config/ssl_dcl.yml

The expected outputs after running the command are given below.

[Image: training outputs for the DCL model]

CAM Visualization

The repository also provides functionality to generate class activation maps (CAMs) for a trained model over the whole test set. The script scripts/cam_visualizations.py exposes this functionality. Run the following commands to generate CAMs for a trained model.

$ cd scripts
$ python cam_visualizations.py --config_path=<path to the corresponding configuration '.yml' file> \
--model_checkpoints=<path to the downloaded model checkpoints> \
--root_dataset_path=<path to the dataset root directory> \
--output_directory=<path to output directory to save the visualizations>

If the --root_dataset_path parameter is not provided, the program will download the dataset automatically and then generate the visualizations. For more information, run:

$ python cam_visualizations.py --help
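For orientation, here is a minimal sketch of how a single CAM overlay is typically produced with the pytorch-grad-cam library acknowledged below. It is not the repository's script: the model, target layer, and file names are illustrative assumptions, and the library's exact API varies slightly between versions.

import numpy as np
import torch
from PIL import Image
from torchvision import models
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.image import show_cam_on_image

# Illustrative model and target layer; a trained FGVC checkpoint would be loaded here instead.
model = models.resnet50(pretrained=True).eval()
rgb = np.asarray(Image.open("bird.jpg").convert("RGB").resize((224, 224))) / 255.0
input_tensor = torch.from_numpy(rgb).float().permute(2, 0, 1).unsqueeze(0)  # HWC -> NCHW
# (ImageNet mean/std normalization omitted here for brevity.)

cam = GradCAM(model=model, target_layers=[model.layer4[-1]])
grayscale_cam = cam(input_tensor=input_tensor)[0]   # heatmap for the top-predicted class
overlay = show_cam_on_image(rgb.astype(np.float32), grayscale_cam, use_rgb=True)
Image.fromarray(overlay).save("bird_cam.jpg")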

Docker

We also provide a Dockerfile for containerization and a docker-compose.yml file for running the training as a service.

Follow the steps below to run the training as a service (an illustrative compose sketch follows the steps).

  1. Install the docker dependencies using install_docker_dependencies.sh:
$ cd scripts
$ bash install_docker_dependencies.sh
  2. Create the docker image by running the following command from the root repository directory (note the trailing dot, which sets the build context):
$ docker build -t ssl_for_fgvc:v1.0 .

where ssl_for_fgvc:v1.0 is the docker image name and tag.

  3. Run the training as a docker-compose service:
$ docker-compose up -d
  4. View the training logs:
$ docker-compose logs -f ssl_for_fgvc
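For reference, a compose service of this shape might look like the sketch below. This is an assumption for illustration, not the repository's actual docker-compose.yml; GPU options in particular depend on the docker and docker-compose versions installed.

# Illustrative docker-compose.yml sketch -- the repository ships its own file.
version: "2.3"
services:
  ssl_for_fgvc:                  # service name used by `docker-compose logs -f` above
    image: ssl_for_fgvc:v1.0     # image built in step 2
    runtime: nvidia              # assumes the NVIDIA container runtime is installed
    command: python main.py --config_path=./config/ssl_dcl.yml
    volumes:
      - ./:/workspace            # hypothetical mount of the repo into the container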

Acknowledgements

  1. Dataloader: https://github.com/TDeVries/cub2011_dataset
  2. Rotation Based SSL Task: https://github.com/valeoai/BF3S
  3. PyContrast for basic PIRL Implementation: https://github.com/HobbitLong/PyContrast
  4. VISSL for Pretrained Contrastive SSL Models: https://github.com/facebookresearch/vissl
  5. Barlow Twins Implementation: https://github.com/IgorSusmelj/barlowtwins
  6. DCL Official Implementation: https://github.com/JDAI-CV/DCL
  7. Grad-CAM Visualizations: https://github.com/jacobgil/pytorch-grad-cam

References

[1] Gidaris, Spyros, et al. "Boosting few-shot visual learning with self-supervision." ICCV 2019.
[2] Misra, Ishan, and Laurens van der Maaten. "Self-supervised learning of pretext-invariant representations." CVPR 2020.
[3] Chen, Yue, et al. "Destruction and construction learning for fine-grained image recognition." CVPR 2019.
[4] Sun, Guolei, et al. "Fine-grained recognition: Accounting for subtle differences between similar classes." AAAI 2020.