Multi-Task-Transformer
Code for the ICLR 2023 paper "TaskPrompter: Spatial-Channel Multi-Task Prompting for Dense Scene Understanding" and the ECCV 2022 paper "Inverted Pyramid Multi-task Transformer for Dense Scene Understanding"
:fire: [ICLR2023, ECCV2022, TPAMI2024] Powerful Multi-Task Transformers for Scene Understanding
:scroll: Introduction
This repository provides code and models for two powerful multi-task transformers for scene understanding. Please check the following pages for details:
Hanrong Ye and Dan Xu, TaskPrompter: Spatial-Channel Multi-Task Prompting for Dense Scene Understanding. ICLR 2023
Hanrong Ye and Dan Xu, Inverted Pyramid Multi-task Transformer for Dense Scene Understanding. ECCV 2022
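As a quick way to get started, here is a minimal sketch that clones the repository and lists its contents; the repository URL and the per-model subdirectory names are assumptions based on the project pages above, not verified paths.

```bash
# Minimal getting-started sketch. The URL and subdirectory names below
# are assumptions; check the project pages above for the exact layout.
git clone https://github.com/prismformore/Multi-Task-Transformer.git
cd Multi-Task-Transformer
ls  # expect one subdirectory per model, e.g. InvPT/ and TaskPrompter/ (assumed)
```

Each model's subdirectory is assumed to contain its own README with setup, training, and evaluation instructions.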
Cite
BibTeX:
@InProceedings{invpt2022,
  title     = {Inverted Pyramid Multi-task Transformer for Dense Scene Understanding},
  author    = {Ye, Hanrong and Xu, Dan},
  booktitle = {ECCV},
  year      = {2022}
}

@InProceedings{taskprompter2023,
  title     = {TaskPrompter: Spatial-Channel Multi-Task Prompting for Dense Scene Understanding},
  author    = {Ye, Hanrong and Xu, Dan},
  booktitle = {ICLR},
  year      = {2023}
}

@article{ye2023invpt++,
  title   = {InvPT++: Inverted Pyramid Multi-Task Transformer for Visual Scene Understanding},
  author  = {Ye, Hanrong and Xu, Dan},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year    = {2024}
}
If you find this repository helpful, please consider :star2: starring our project and sharing it with your community!
Contact
Please contact Hanrong Ye if you have any questions.
Related Project
Few-shot learning of multiple tasks: Visual Token Matching (ICLR 2023 Outstanding Paper Award)