CoVR
Official PyTorch implementation of the paper "CoVR: Learning Composed Video Retrieval from Web Video Captions".
CoVR: Composed Video Retrieval
Learning Composed Video Retrieval from Web Video Captions
Lucas Ventura · Antoine Yang · Cordelia Schmid · Gül Varol
Composed Image Retrieval (CoIR) has recently gained popularity as a task that considers both text and image queries together, to search for relevant images in a database. Most CoIR approaches require manually annotated datasets, comprising image-text-image triplets, where the text describes a modification from the query image to the target image. However, manual curation of CoIR triplets is expensive and prevents scalability. In this work, we instead propose a scalable automatic dataset creation methodology that generates triplets given video-caption pairs, while also expanding the scope of the task to include composed video retrieval (CoVR). To this end, we mine paired videos with a similar caption from a large database, and leverage a large language model to generate the corresponding modification text. Applying this methodology to the extensive WebVid2M collection, we automatically construct our WebVid-CoVR dataset, resulting in 1.6 million triplets. Moreover, we introduce a new benchmark for CoVR with a manually annotated evaluation set, along with baseline results. Our experiments further demonstrate that training a CoVR model on our dataset effectively transfers to CoIR, leading to improved state-of-the-art performance in the zero-shot setup on both the CIRR and FashionIQ benchmarks. Our code, datasets, and models are publicly available.
Description
This repository contains the code for the paper "CoVR: Learning Composed Video Retrieval from Web Video Captions".
Please visit our webpage for more details.
This repository contains:
📦 covr
┣ 📂 configs # hydra config files
┣ 📂 src # PyTorch datamodules
┣ 📂 tools # scripts and notebooks
┣ 📜 .gitignore
┣ 📜 LICENSE
┣ 📜 README.md
┣ 📜 test.py
┗ 📜 train.py
Installation :construction_worker:
Create environment
conda create --name covr
conda activate covr
To install the necessary packages, you can use the provided requirements.txt file:
python -m pip install -r requirements.txt
Alternatively, you can manually install the following packages inside the conda environment:
python -m pip install pytorch_lightning --upgrade
python -m pip install hydra-core --upgrade
python -m pip install lightning
python -m pip install einops
python -m pip install pandas
python -m pip install opencv-python
python -m pip install timm
python -m pip install fairscale
python -m pip install tabulate
python -m pip install transformers
The code was tested on Python 3.8 and PyTorch 2.0.
Download the datasets
WebVid-CoVR
To use the WebVid-CoVR dataset, you will have to download the WebVid videos and the WebVid-CoVR annotations.
To download the annotations, run:
bash tools/scripts/download_annotation.sh covr
To download the videos, install mpi4py and run:
python tools/scripts/download_covr.py <split>
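For example, to fetch only the training videos (the split name below is an illustration; check the script's help for the exact values it accepts):
# Hypothetical example: download the WebVid-CoVR training split
python tools/scripts/download_covr.py train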
CIRR
To use the CIRR dataset, you will have to download the CIRR images and the CIRR annotations.
To download the annotations, run:
bash tools/scripts/download_annotation.sh cirr
To download the images, follow the instructions in the CIRR repository. The default folder structure is the following:
📦 CoVR
┣ 📂 datasets
┃ ┣ 📂 CIRR
┃ ┃ ┣ 📂 images
┃ ┃ ┃ ┣ 📂 train
┃ ┃ ┃ ┣ 📂 dev
┃ ┃ ┃ ┗ 📂 test1
FashionIQ
To use the FashionIQ dataset, you will have to download the FashionIQ images and the FashionIQ annotations.
To download the annotations, run:
bash tools/scripts/download_annotation.sh fiq
To download the images, the URLs are provided in the FashionIQ repository. You can use this script to download the images. Some missing images can also be found here. All the images should be placed in the same folder (datasets/fashion-iq/images).
(Optional) Download pre-trained models
To download the checkpoints, run:
bash tools/scripts/download_pretrained_models.sh
Usage :computer:
Computing BLIP embeddings
Before training, you will need to compute the BLIP embeddings for the videos/images. To do so, run:
# This will compute the embeddings for the WebVid-CoVR videos.
# Note that you can use multiple GPUs with --num_shards and --shard_id
python tools/embs/save_blip_embs_vids.py --video_dir datasets/WebVid/2M/train --todo_ids annotation/webvid-covr/webvid2m-covr_train.csv
# This will compute the embeddings for the WebVid-CoVR-Test videos.
python tools/embs/save_blip_embs_vids.py --video_dir datasets/WebVid/8M/train --todo_ids annotation/webvid-covr/webvid8m-covr_test.csv
# This will compute the embeddings for the CIRR images.
python tools/embs/save_blip_embs_imgs.py --image_dir datasets/CIRR/images/
# This will compute the embeddings for FashionIQ images.
python tools/embs/save_blip_embs_imgs.py --image_dir datasets/fashion-iq/images/
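If several GPUs are available, the video embedding step can be parallelized with the --num_shards and --shard_id flags mentioned above. A minimal sketch, assuming the flags split the list of videos evenly across processes:
# Hypothetical two-GPU run: each process computes embeddings for half of the videos
CUDA_VISIBLE_DEVICES=0 python tools/embs/save_blip_embs_vids.py --video_dir datasets/WebVid/2M/train --todo_ids annotation/webvid-covr/webvid2m-covr_train.csv --num_shards 2 --shard_id 0 &
CUDA_VISIBLE_DEVICES=1 python tools/embs/save_blip_embs_vids.py --video_dir datasets/WebVid/2M/train --todo_ids annotation/webvid-covr/webvid2m-covr_train.csv --num_shards 2 --shard_id 1 &
wait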
Training
The command to launch a training experiment is the following:
python train.py [OPTIONS]
Argument parsing is handled by the Hydra library. You can override anything in the configuration by passing arguments like foo=value or foo.bar=value. See the Options parameters section at the end of this README for more details.
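For example, a training run on WebVid-CoVR starting from the BLIP-L COCO checkpoint could look like this (the option values are the ones listed in the Options parameters section below; the exact combination is a sketch, not a prescribed recipe):
# Train on WebVid-CoVR from the BLIP-L COCO checkpoint on GPU
python train.py data=webvid-covr model/ckpt=blip-l-coco trainer=gpu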
Evaluating
The command to evaluate is the following:
python test.py test=<test> [OPTIONS]
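For example, to evaluate the released checkpoints on individual benchmarks (option values taken from the Options parameters section below):
# Evaluate the WebVid-CoVR checkpoint on the WebVid-CoVR test set
python test.py test=webvid-covr model/ckpt=webvid-covr

# Evaluate the CIRR-finetuned checkpoint on CIRR
python test.py test=cirr model/ckpt=cirr_ft-covr+gt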
Options parameters
Datasets:
- data=webvid-covr: WebVid-CoVR dataset.
- data=cirr: CIRR dataset.
- data=fashioniq: FashionIQ dataset.
Tests:
- test=all: Test on WebVid-CoVR, CIRR, and all three FashionIQ test sets.
- test=webvid-covr: Test on WebVid-CoVR.
- test=cirr: Test on CIRR.
- test=fashioniq: Test on all three FashionIQ test sets (dress, shirt, and toptee).
Checkpoints:
- model/ckpt=blip-l-coco: Default checkpoint for BLIP-L finetuned on COCO.
- model/ckpt=webvid-covr: Default checkpoint for CoVR finetuned on WebVid-CoVR.
- model/ckpt=fashioniq-all-ft_covr: Default checkpoint pretrained on WebVid-CoVR and finetuned on FashionIQ.
- model/ckpt=cirr_ft-covr+gt: Default checkpoint pretrained on WebVid-CoVR and finetuned on CIRR.
Training
- trainer=gpu: training with CUDA; change devices to the number of GPUs you want to use.
- trainer=ddp: training with Distributed Data Parallel (DDP); change devices and num_nodes to the number of GPUs and number of nodes you want to use (see the sketch after this list).
- trainer=cpu: training on the CPU (not recommended).
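A sketch of a multi-node DDP launch; the trainer.devices and trainer.num_nodes override paths are assumptions based on the options above and may differ in your trainer config:
# Hypothetical DDP run on 2 nodes with 4 GPUs each
python train.py data=webvid-covr trainer=ddp trainer.devices=4 trainer.num_nodes=2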
Logging
- trainer/logger=csv: log the results in a CSV file. Very basic functionality.
- trainer/logger=wandb: log the results in wandb. This requires installing wandb and setting up your wandb account. This is what we used to log our experiments.
- trainer/logger=<other>: other loggers (not tested).
Machine
- machine=server: You can change the default path to the dataset folder and the batch size. You can create your own machine configuration by adding a new file in configs/machine.
Experiment
There are many pre-defined experiments from the paper in configs/experiments. Simply add experiment=<experiment> to the command line to use them.
Citation
If you use this dataset and/or this code in your work, please cite our paper:
@article{ventura23covr,
title = {{CoVR}: Learning Composed Video Retrieval from Web Video Captions},
author = {Lucas Ventura and Antoine Yang and Cordelia Schmid and G{\"u}l Varol},
journal = {AAAI},
year = {2024}
}
Acknowledgements
Based on BLIP and lightning-hydra-template.