FeatUp
Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution", ICLR 2024
FeatUp: A Model-Agnostic Framework for Features at Any Resolution
ICLR 2024
Stephanie Fu*, Mark Hamilton*, Laura Brandt, Axel Feldmann, Zhoutong Zhang, William T. Freeman (*Equal Contribution)

TL;DR: FeatUp improves the spatial resolution of any model's features by 16-32x without changing their semantics.
https://github.com/mhamilton723/FeatUp/assets/6456637/8fb5aa7f-4514-4a97-aebf-76065163cdfd
Contents
- Install
- Using Pretrained Upsamplers
- Fitting an Implicit Upsampler
- Coming Soon
- Citation
- Contact
Install
Pip
For those just looking to quickly use the FeatUp APIs, install via:
pip install git+https://github.com/mhamilton723/FeatUp
Local Development
To install FeatUp for local development (which also gives you access to the sample images), install using the following:
git clone https://github.com/mhamilton723/FeatUp.git
cd FeatUp
pip install -e .
Using Pretrained Upsamplers
To see examples of pretrained model usage, please see our Colab notebook. We currently supply the following pretrained versions of FeatUp's JBU upsampler:
| Model Name | Checkpoint | Torch Hub Repository | Torch Hub Name |
|---|---|---|---|
| DINO | Download | mhamilton723/FeatUp | dino16 |
| DINO v2 | Download | mhamilton723/FeatUp | dinov2 |
| CLIP | Download | mhamilton723/FeatUp | clip |
| ViT | Download | mhamilton723/FeatUp | vit |
| ResNet50 | Download | mhamilton723/FeatUp | resnet50 |
For example, to load the FeatUp JBU upsampler for the DINO backbone:
upsampler = torch.hub.load("mhamilton723/FeatUp", 'dino16')
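A fuller usage sketch follows. It assumes standard ImageNet preprocessing and a 224x224 input (which yields a 14x14 patch grid for DINO ViT-S/16); the `upsampler.model` attribute for accessing the low-resolution backbone features and the `sample.jpg` filename are illustrative, so adapt them to your setup:

```python
import torch
import torchvision.transforms as T
from PIL import Image

# Load the pretrained JBU upsampler for the DINO backbone
# (weights are downloaded from Torch Hub on first use).
upsampler = torch.hub.load("mhamilton723/FeatUp", "dino16")
upsampler.eval()

# Standard ImageNet normalization; 224x224 input -> 14x14 DINO patch grid.
transform = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
image = transform(Image.open("sample.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    lr_feats = upsampler.model(image)  # low-res backbone features
    hr_feats = upsampler(image)        # upsampled, higher-resolution features

# The upsampled features keep the backbone's channel dimension but gain
# spatial resolution, so hr_feats can be used as a drop-in replacement
# wherever the original dense features were consumed.
```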
Fitting an Implicit Upsampler to an Image
To train an implicit upsampler for a given image and backbone, first clone the repository and install it for local development. Then run:
cd featup
python train_implicit_upsampler.py
Parameters for this training run can be found in the implicit_upsampler config file.
Coming Soon
- Training your own FeatUp joint bilateral upsampler
- Simple API for Implicit FeatUp training
- Pretrained JBU models without layer-norms
Citation
@inproceedings{
fu2024featup,
title={FeatUp: A Model-Agnostic Framework for Features at Any Resolution},
author={Stephanie Fu and Mark Hamilton and Laura E. Brandt and Axel Feldmann and Zhoutong Zhang and William T. Freeman},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=GkJiNn2QDF}
}
Contact
For feedback, questions, or press inquiries, please contact Stephanie Fu and Mark Hamilton.