Pose2ID
[CVPR 2025] Official repository for "From Poses to Identity: Training-Free Person Re-Identification via Feature Centralization"
🔥 A very simple yet effective framework for ReID tasks and models. 🔥
🔥 A powerful pedestrian generation model (IPG) across RGB, infrared, and occlusion scenes. 🔥
[2025-12-03 NEWS!!!] 🔥 We launched OmniPerson, a unified and powerful pedestrian generation model. 🔥

We propose:
- a Training-Free Feature Centralization framework (Pose2ID) that can be directly applied to different ReID tasks and models, even an ImageNet pre-trained model without any ReID training;
- an Identity-Guided Pedestrian Generation (IPG) paradigm that leverages identity features to generate high-quality images of the same identity in different poses to achieve feature centralization;
- Neighbor Feature Centralization (NFC), which discovers hidden positive samples of the gallery/query set from each sample's neighborhood to achieve feature centralization.
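For intuition, here is a minimal toy sketch (not our official code; all names, dimensions, and the noise level are illustrative) of why fusing same-identity features centralizes them toward their identity center:
```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
center = F.normalize(torch.randn(1, 256), dim=1)                # hypothetical identity center
feats = F.normalize(center + 0.5 * torch.randn(8, 256), dim=1)  # noisy same-identity features

# fuse each feature with the mean of its positives, then renormalize
fused = F.normalize(feats + feats.mean(dim=0, keepdim=True), dim=1)

print("mean cosine to center before:", (feats @ center.T).mean().item())
print("mean cosine to center after: ", (fused @ center.T).mean().item())
```
Averaging same-identity features cancels sample-specific noise, so every fused feature moves toward the identity center and the intra-identity distribution tightens.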

📣 Updates
- [2025.12.03] 🔥🔥🔥 We launched OmniPerson, a powerful pedestrian generation model (images/videos/infrared/multi-reference). Code is available here
- [2025.03.19] 🔥 A demo of TransReID on Market1501 is available!
- [2025.03.06] 🔥 Pretrained weights are available on HuggingFace!
- [2025.03.04] 🔥 Paper is available on Arxiv!
- [2025.03.03] 🔥 Official code has been released!
- [2025.02.27] 🔥🔥🔥 Pose2ID is accepted to CVPR 2025!
⚒️ Quick Start
There are two parts to our project: Identity-Guided Pedestrian Generation (IPG) and Neighbor Feature Centralization (NFC).
IPG uses generated pedestrian images to centralize features. It can be implemented with just a few lines of code:
```python
'''
normal ReID feature extraction to get feats
'''
feats_ipg = torch.zeros_like(feats)
# fuse features of generated positive samples with different poses
for i in range(num_poses):
    feats_ipg += reid_model(imgs_pose[i])  # any ReID model; imgs_pose[i]: generated images in pose i
eta = 1  # controls the impact of the generated images (considering their quality)
# centralize features and normalize back to the original distribution
feats = torch.nn.functional.normalize(feats + eta * feats_ipg, dim=1, p=2)  # L2 normalization
'''
compute distance matrix or post-processing like re-ranking
'''
```
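For completeness, here is a minimal sketch of the matching step that follows; it assumes L2-normalized features (so Euclidean and cosine distance give the same ranking), and all names are illustrative:
```python
import torch
import torch.nn.functional as F

# hypothetical centralized, L2-normalized query/gallery features
query_feats = F.normalize(torch.randn(4, 256), dim=1)
gallery_feats = F.normalize(torch.randn(100, 256), dim=1)

dist = torch.cdist(query_feats, gallery_feats, p=2)  # (num_query, num_gallery) distance matrix
ranking = dist.argsort(dim=1)                        # gallery indices per query, nearest first
```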
NFC explores each sample's potential positive samples in its neighborhood. It can also be implemented in a few lines:
```python
from NFC import NFC
feats = NFC(feats, k1=2, k2=2)
```
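For intuition only, the sketch below shows a simplified mutual-k-nearest-neighbor aggregation in the same spirit; it is our illustrative reading of the idea, not the official NFC implementation shipped in this repository:
```python
import torch
import torch.nn.functional as F

def nfc_sketch(feats: torch.Tensor, k1: int = 2, k2: int = 2) -> torch.Tensor:
    """Fuse each (L2-normalized) feature with neighbors that mutually rank
    each other highly -- a simplified reading of NFC, not the official code."""
    sim = feats @ feats.T                  # cosine similarity matrix
    sim.fill_diagonal_(float("-inf"))      # exclude self-matches
    knn = sim.topk(k1, dim=1).indices      # each sample's k1 nearest neighbors
    out = feats.clone()
    for i in range(feats.size(0)):
        for j in knn[i]:
            if i in sim[j].topk(k2).indices:   # keep j only if the relation is mutual
                out[i] = out[i] + feats[j]     # likely a hidden positive sample
    return F.normalize(out, dim=1)
```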
Demo for TransReID on Market1501 dataset
- Follow the official instructions of TransReID to install the environment and run their test script. If it runs successfully, our demo runs in the same environment.
- Modify the configuration file configs/Market/vit_transreid_stride.yml to choose whether to use NFC and/or IPG feature centralization:
```yaml
TEST:
  NFC: True
  IPG: True
```
- If you want to test IPG feature centralization, download the generated images (Gallery & Query) and put them in the Market1501 folder. The folder structure should look like this:
```
Market1501
├── bounding_box_test      # original gallery images
├── bounding_box_test_gen  # generated gallery images
├── bounding_box_train     # original training images
├── query                  # original query images
└── query_gen              # generated query images
```
- Run the test script.
Use their official pretrained model, or our pretrained model (without camera ID) on HuggingFace (transformer_20.pth). If you use the model without camera ID, please set camera_num in Line 45 of test.py to 0.
```bash
cd demo/TransReID  # the same as the official repository
python test.py --config_file configs/Market/vit_transreid_stride.yml MODEL.DEVICE_ID "('0')" TEST.WEIGHT 'path/to/your/pretrained/model'
```
NOTE: If all goes well, you should reproduce the results in the first two rows of Table 1.
📊 Experiments
ID² Metric
We propose a quantitative metric of Identity Density (ID²) to replace visualization tools like t-SNE, which are stochastic and focus on only a few samples.
It can be used in one line:
```python
from ID2 import ID2
density = ID2(feats, pids)  # each ID's density
density.mean(0)             # global density
```
where feats is the features extracted by a ReID model and pids is the corresponding person IDs.
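The exact definition of ID² is given in the paper. Purely to illustrate the interface above, here is a hedged sketch that scores each identity by its mean pairwise cosine similarity; this is our simplification, not the official ID2 implementation:
```python
import torch
import torch.nn.functional as F

def id2_sketch(feats: torch.Tensor, pids: torch.Tensor) -> torch.Tensor:
    """Per-identity density as mean pairwise cosine similarity (illustrative only)."""
    feats = F.normalize(feats, dim=1)
    densities = []
    for pid in pids.unique():
        f = feats[pids == pid]
        n = f.size(0)
        if n < 2:
            densities.append(feats.new_tensor(1.0))  # a lone sample is maximally dense
            continue
        sim = f @ f.T
        off_diag = sim.sum() - sim.diagonal().sum()
        densities.append(off_diag / (n * (n - 1)))
    return torch.stack(densities)  # one density value per identity
```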
Improvements on Person ReID tasks

All experiments are conducted with the official code and pretrained models. We appreciate their official repositories and great work.
Model without ReID training
TransReID loads a ViT model pre-trained on ImageNet before training on the ReID task. This experiment is conducted on that pre-trained model, which has NOT been trained on any ReID task.
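As a hedged illustration of how such a model can be plugged into the framework, the sketch below uses a torchvision ImageNet ViT as a stand-in feature extractor (our assumption for demonstration, not the exact checkpoint used in the paper):
```python
import torch
import torch.nn.functional as F
import torchvision

# ImageNet-pretrained ViT with no ReID training; the classification head is
# replaced by Identity so the model returns the CLS-token feature.
vit = torchvision.models.vit_b_16(weights="IMAGENET1K_V1")
vit.heads = torch.nn.Identity()
vit.eval()

with torch.no_grad():
    imgs = torch.randn(4, 3, 224, 224)     # dummy batch of resized person crops
    feats = F.normalize(vit(imgs), dim=1)  # 768-dim L2-normalized features
```
These features can then be centralized with IPG/NFC exactly as above.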
Ablation Studies

Randomly Generated Images

🚀 IPG Installation
Download the Code
```bash
git clone https://github.com/yuanc3/Pose2ID
cd Pose2ID/IPG
```
Python Environment Setup
Create a conda environment (recommended):
```bash
conda create -n IPG python=3.9
conda activate IPG
```
Install packages with pip:
```bash
pip install -r requirements.txt
```
Download pretrained weights
- Download official models from:
- Download our IPG pretrained weights from HuggingFace or Google Drive, and put them in the pretrained directory:
```bash
git lfs install
git clone https://huggingface.co/yuanc3/Pose2ID pretrained
```
The pretrained weights are organized as follows:
```
./pretrained/
├── denoising_unet.pth
├── reference_unet.pth
├── IFR.pth
├── pose_guider.pth
└── transformer_20.pth
```
Inference
Run the inference.py script. It generates images with the poses in standard_poses for each reference image in ref. The output images are saved in output.
```bash
python inference.py --ckpt_dir pretrained --pose_dir standard_poses --ref_dir ref --out_dir output
```
- --ckpt_dir: directory of pretrained weights;
- --pose_dir: directory of target poses (we provide the 8 poses used in our experiments);
- --ref_dir: directory of reference images (we provide 10 reference images);
- --out_dir: directory of output images.
Official generated images on Market1501
Here we provide our generated images for the gallery and query sets of the Market1501 test set, using our 8 representative poses.
Getting target poses
We use DWpose to extract poses with 18 keypoints. Please follow their official instructions. You may also use other pose estimation methods that produce 18-keypoint poses.
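If you prepare your own pose inputs, note that DWpose outputs OpenPose/COCO-style body keypoints; assuming that convention (please verify against the DWpose repository), the 18-keypoint ordering is:
```python
# OpenPose/COCO-style 18-keypoint ordering (our assumption of the layout;
# verify against the DWpose repository before relying on it):
KEYPOINTS_18 = [
    "nose", "neck",
    "right_shoulder", "right_elbow", "right_wrist",
    "left_shoulder", "left_elbow", "left_wrist",
    "right_hip", "right_knee", "right_ankle",
    "left_hip", "left_knee", "left_ankle",
    "right_eye", "left_eye", "right_ear", "left_ear",
]
```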
📝 Release Plans
| Status | Milestone | ETA |
|---|---|---|
| 🚀 | Training codes | OmniPerson |
| 🚀 | IPG model trained on more data | OmniPerson |
| 🚀 | IPG model with modality transfer ability (RGB2IR) | OmniPerson |
| 🚀 | Video-IPG model | OmniPerson |
📒 Citation
If you find our work useful for your research, please consider citing the paper:
```bibtex
@inproceedings{yuan2025poses,
  title={From poses to identity: Training-free person re-identification via feature centralization},
  author={Yuan, Chao and Zhang, Guiwei and Ma, Changxiao and Zhang, Tianyi and Niu, Guanglin},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={24409--24418},
  year={2025}
}
```
or
```bibtex
@article{yuan2025poses,
  title={From Poses to Identity: Training-Free Person Re-Identification via Feature Centralization},
  author={Yuan, Chao and Zhang, Guiwei and Ma, Changxiao and Zhang, Tianyi and Niu, Guanglin},
  journal={arXiv preprint arXiv:2503.00938},
  year={2025}
}
```