Cross-View-Gait-Deep-CNNs
Cross-View Gait Based Human Identification with Deep CNNs
A PyTorch implementation of the Local-Bottom Network (LB) from the paper:
Wu, Zifeng, et al. "A Comprehensive Study on Cross-View Gait Based Human Identification with Deep CNNs." IEEE Transactions on Pattern Analysis and Machine Intelligence 39.2 (2017): 209-226.
Dependency
- PyTorch
- visdom
- NumPy
Model
- In `src/model.py` there are two models: LBNet and LBNet_1. LBNet_1 is closer to the model described in Section 4.2.1 of the original paper. You can select either one; the results are close to each other.
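A minimal sketch of switching between the two models, assuming both classes can be constructed without arguments (check `src/model.py` for the actual constructor signatures):

```python
# Hypothetical model selection; the no-argument constructors are an assumption,
# not taken from src/model.py.
from model import LBNet, LBNet_1

USE_LBNET_1 = True  # set to False to use LBNet instead

net = LBNet_1() if USE_LBNET_1 else LBNet()
```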
Training
- To train the model, put the CASIA-B dataset silhouette data under the repository.
- `mkdir snapshot` to build the directory for saving models.
- Go to the `src` dir and run `python3 train.py`.
The model will be saved into the execution dir every 10,000 iterations. You can change the interval in `train.py`.
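A minimal sketch of the periodic saving described above; the variable names and the checkpoint layout are placeholders, not the actual `train.py` code:

```python
# Sketch of periodic checkpointing; net, MAX_ITERS and the checkpoint contents
# are stand-ins for whatever train.py actually uses.
import torch as th
import torch.nn as nn

net = nn.Linear(10, 2)   # stand-in for the LB network
MAX_ITERS = 100000
SAVE_EVERY = 10000       # change this to adjust the saving interval

for iteration in range(1, MAX_ITERS + 1):
    # ... forward pass, loss, backward pass, optimizer step ...
    if iteration % SAVE_EVERY == 0:
        th.save({'iteration': iteration, 'state_dict': net.state_dict()},
                'snapshot_%d.pth' % iteration)
```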
Monitor the performance
- Install visdom.
- Start the visdom server with `python3 -m visdom.server -port 5274` or any port you like (change the port in `train.py` and `test.py`).
- Open this URL in your browser: `http://localhost:5274`. You will see the training loss curve and the validation accuracy curve.
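A minimal sketch of how a loss curve can be pushed to that visdom server; the window name and the `plot_loss` helper are made up for illustration and are not the actual `train.py` code:

```python
# Hypothetical loss plotting via visdom; only the port matches the README above.
import numpy as np
from visdom import Visdom

viz = Visdom(port=5274)  # must match the port passed to visdom.server

def plot_loss(iteration, loss_value):
    # Append one point to a persistent "training loss" window.
    viz.line(X=np.array([iteration]), Y=np.array([loss_value]),
             win='train_loss', update='append',
             opts={'title': 'training loss'})
```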
Testing
- Go to the `src` dir and run `python3 test.py`. You can select which snapshot to use by changing `checkpoint = th.load('../snapshot/snapshot_75000.pth')` to another snapshot (see the sketch after this list). Be patient, since it takes a long time. The computed similarities will be saved into `similarity.npy`.
- Run `python3 compute_acc_per_angle` to compute the accuracy for each probe view and gallery view. The results will be saved into `acc_table.csv`.
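A minimal sketch of pointing the evaluation at a different snapshot; apart from the `th.load` line quoted above, the `'state_dict'` key and the no-argument LBNet constructor are assumptions about the checkpoint layout:

```python
# Hypothetical snapshot selection; the checkpoint key and the constructor call
# are assumptions, not taken from test.py.
import torch as th
from model import LBNet

net = LBNet()
checkpoint = th.load('../snapshot/snapshot_50000.pth')  # any other saved snapshot
net.load_state_dict(checkpoint['state_dict'])
net.eval()
```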
After running `compute_acc_per_angle`, you will get a table like this.
LBNet

LBNet_1

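For reference, a hedged sketch of how a per-view accuracy table like `acc_table.csv` could be assembled from the saved similarities. It assumes `similarity.npy` holds a probe-by-gallery similarity matrix and that the identities and view angles of both sets are available; none of these names come from the repository:

```python
# Generic rank-1 accuracy table over the 11 CASIA-B view angles; all argument
# names are hypothetical.
import numpy as np

VIEWS = list(range(0, 181, 18))  # 0, 18, ..., 180 degrees

def accuracy_table(sim, probe_ids, probe_views, gallery_ids, gallery_views):
    table = np.zeros((len(VIEWS), len(VIEWS)))
    for i, pv in enumerate(VIEWS):
        for j, gv in enumerate(VIEWS):
            p_mask = probe_views == pv
            g_mask = gallery_views == gv
            # Rank-1: for each probe, check whether its most similar gallery
            # sample under this gallery view shares its identity.
            nearest = sim[np.ix_(p_mask, g_mask)].argmax(axis=1)
            correct = gallery_ids[g_mask][nearest] == probe_ids[p_mask]
            table[i, j] = correct.mean()
    return table

# Usage sketch: np.savetxt('acc_table.csv', accuracy_table(...), delimiter=',')
```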