MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models (CVPR 2023)
This is the official implementation of MELTR (CVPR 2023). (arXiv)
Dohwan Ko1*, Joonmyung Choi1*, Hyeong Kyu Choi1, Kyoung-Woon On2, Byungseok Roh2, Hyunwoo J. Kim1.
1Korea University 2Kakao Brain

Citation
@inproceedings{ko2023meltr,
title={MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models},
author={Ko, Dohwan and Choi, Joonmyung and Choi, Hyeong Kyu and On, Kyoung-Woon and Roh, Byungseok and Kim, Hyunwoo J},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2023}
}