
A faster PyTorch implementation of 'Center and Scale Prediction (CSP) for pedestrian detection' (CVPR 2019).

Center and Scale Prediction (CSP) for pedestrian detection

Introduction

This is an unofficial PyTorch implementation of High-level Semantic Feature Detection: A New Perspective for Pedestrian Detection. CSP is an effective and efficient pedestrian detector and achieves promising results on the CityPersons dataset. We implement CSP in PyTorch based on previous works: the official code (Keras) and an unofficial code (PyTorch). Compared with them, our code has the following features:

  • Support for Apex mixed-precision training (a minimal sketch follows this list)
  • Support for distributed and non-distributed training
  • Support for more backbones, such as ResNet, DLA-34, and HRNet
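
As a reference for the mixed-precision feature, here is a minimal sketch of how Apex AMP typically wraps a model and optimizer. The model, optimizer, and opt_level below are illustrative placeholders, not this repo's actual training setup:

import torch
from apex import amp  # NVIDIA Apex must be installed separately

# Placeholder model and optimizer, only to illustrate the AMP wrapping step.
model = torch.nn.Conv2d(3, 64, 3).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# opt_level "O1" runs most ops in half precision while keeping fp32 master weights.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

loss = model(torch.randn(1, 3, 224, 224).cuda()).mean()
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()  # backward on the scaled loss to avoid fp16 underflow
optimizer.step()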

We obtain much faster training/inference speed (3 hours for 120 epochs on two GPUs) and comparable performance. We think CSP is a strong baseline for pedestrian detection, and it still has much room for improvement. We will continuously update this repo and add useful tricks (e.g. data augmentation) for better performance.
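
For readers new to the method: CSP casts detection as predicting, on a downsampled feature map, a center heatmap, a scale (height) map, and an offset map. Below is a minimal sketch of such a head; the channel sizes are illustrative, not taken from this repository:

import torch
import torch.nn as nn

class CSPHead(nn.Module):
    # A CSP-style head sketch: one shared conv, then three 1x1 prediction branches.
    def __init__(self, in_channels=256):
        super().__init__()
        self.feat = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.center = nn.Conv2d(256, 1, 1)  # per-location pedestrian-center logits
        self.scale = nn.Conv2d(256, 1, 1)   # log-height regressed at each location
        self.offset = nn.Conv2d(256, 2, 1)  # sub-pixel offset of each center

    def forward(self, x):
        f = self.feat(x)
        return self.center(f), self.scale(f), self.offset(f)

At inference, local maxima of the sigmoid center heatmap above a threshold become detections, and boxes are recovered from the predicted height using a fixed pedestrian aspect ratio (0.41 in the paper).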

Models

| Model          | Reasonable | Heavy Occlusion | All   | Training time | Link                  |
| -------------- | ---------- | --------------- | ----- | ------------- | --------------------- |
| ResNet-50      | 11.30      | 41.09           | 37.55 | ~5 hours      | BaiduYun (code: v61g) |
| DLA-34         | 11.12      | 43.00           | 37.32 | ~3 hours      |                       |
| HRNet-18       | 10.24      | 37.72           | 36.15 | ~11 hours     |                       |
| HRNet-32       | 9.69       | 36.48           | 35.47 | ~13 hours     |                       |
| HRNet-32 + SWA | 9.66       | 34.61           | 34.86 |               | BaiduYun (code: v61g) |

Note: Training time is measured on two 2080Ti GPUs for 120 epochs. Reasonable / Heavy Occlusion / All report miss rate (lower is better) on the corresponding CityPersons validation subsets. We will further tune some hyperparameters (e.g. learning rate, batch size) and then release the remaining models.
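
The last row uses stochastic weight averaging (SWA). As a reference, here is a minimal SWA sketch built on torch.optim.swa_utils; the model, data loader, and schedule are placeholders, not this repo's training code:

import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

# Placeholder model, optimizer, and loader, only to illustrate the SWA loop.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loader = [(torch.randn(4, 10), torch.randn(4, 1)) for _ in range(8)]

swa_model = AveragedModel(model)       # keeps a running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=1e-3)

for epoch in range(5):
    for x, y in loader:
        loss = torch.nn.functional.mse_loss(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    if epoch >= 2:                     # start averaging after a warm-up phase
        swa_model.update_parameters(model)
        swa_scheduler.step()

update_bn(loader, swa_model)           # recompute BatchNorm statistics for the averaged weights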

Getting Started

Prerequisites

Installation

git clone git@github.com:ligang-cs/CSP-Pedestrian-detection.git
cd CSP-Pedestrian-detection/utils
make all

Data preparation

You need to download the CityPersons dataset.

Your directory tree should look like this:

$root_path/
├── images
│   ├── train
│   └── val
├── annotations
│   ├── anno_train.mat
│   └── anno_val.mat
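
To sanity-check the layout, here is a small sketch; root_path is a placeholder, and the key names inside the .mat files depend on the CityPersons release you downloaded:

import os
from scipy.io import loadmat

root_path = "/path/to/CityPersons"  # placeholder: set this to your dataset root

for split in ("train", "val"):
    img_dir = os.path.join(root_path, "images", split)
    anno_file = os.path.join(root_path, "annotations", f"anno_{split}.mat")
    assert os.path.isdir(img_dir), f"missing image directory: {img_dir}"
    anno = loadmat(anno_file)
    # Inspect the annotation keys before writing any parsing code.
    print(split, [k for k in anno if not k.startswith("__")])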

Train and test

Please specify the configuration file before running the commands below.

Distributed training

CUDA_VISIBLE_DEVICES=<gpus_ids> python -m torch.distributed.launch --nproc_per_node <gpus_number> trainval_distributed.py --work-dir <save_path> 

Non-distributed training

CUDA_VISIBLE_DEVICES=<gpus_ids> python trainval.py --work-dir <save_path>

Test

CUDA_VISIBLE_DEVICES=<gpus_ids> python test.py --val-path <checkpoint_path> --json-out <results_path>

Contact

If you have any questions, please do not hesitate to contact Li Gang ([email protected]).

We also appreciate all contributions to improve this repo.

Acknowledgement

Many thanks to the authors of the official Keras implementation and the unofficial PyTorch implementation!