# UniHead
Unifying Visual Perception by Dispersible Points Learning (ECCV 2022)
Official code for "Unifying Visual Perception by Dispersible Points Learning". The implementation is based on United-Perception.
## Introduction
UniHead is a plug-in perception head that can be used with different detection frameworks (two-stage or one-stage pipelines) and across different tasks (image classification, object detection, instance segmentation, and pose estimation).
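The unifying idea is that one head predicts a set of dispersible points per instance, and the points are interpreted differently per task: collapsed into a bounding box for detection, or read directly as keypoints for pose estimation. A minimal sketch of the box-conversion step (illustrative only; the function name and shapes below are assumptions, not the repository's API):

```python
import numpy as np

def points_to_bbox(points):
    """Collapse a set of predicted points of shape (K, 2) into a
    bounding box (x1, y1, x2, y2) by taking coordinate extremes --
    a common way point-based heads derive boxes for detection."""
    x1, y1 = points.min(axis=0)
    x2, y2 = points.max(axis=0)
    return np.array([x1, y1, x2, y2])

# Hypothetical head output: K=4 points dispersed over one object.
pts = np.array([[10.0, 20.0], [30.0, 5.0], [25.0, 40.0], [8.0, 15.0]])
bbox = points_to_bbox(pts)  # -> [8., 5., 30., 40.]
```

For pose estimation, the same `pts` array would instead be matched to keypoint annotations directly, which is what lets a single point-based representation serve multiple tasks.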

## Guide to Our Code
Currently, configs can be found in `configs/unihead`.
## Experiments on MS-COCO 2017
Our original implementation is based on an unreleased internal detection framework, so there may be a small performance gap.
### Different Detection Pipelines
| Pipeline | mAP | Config | Model |
|---|---|---|---|
| two-stage | 42.0 | config | |
| cascade | 42.8 | config | |
### Different Tasks
| Task | mAP | Config | Model |
|---|---|---|---|
| detection | 42.0 | config | |
| instance segmentation | 30.3 | config | |
| pose estimation | 57.6 | config | |
More results and models will soon be released.
## License
This project is released under the MIT license. Please see the LICENSE file for more information.
## Citation
@article{liang2022unifying,
  author  = {Jianming Liang and Guanglu Song and Biao Leng and Yu Liu},
  journal = {arXiv preprint arXiv:2208.08630},
  title   = {Unifying Visual Perception by Dispersible Points Learning},
  year    = {2022},
}