GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos
[Project Website :dart:] [Paper :page_with_curl:] [Code :octocat:]
This repository contains code for the CVPR'24 paper GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos.
Run the model on your images and prompts
- Environment setup
  - Use the provided Dockerfile to build the environment (docker build -t genhowto .) or install the packages manually (pip install diffusers==0.18.2 transformers xformers accelerate).
  - The code was tested with PyTorch 2.0.
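  For convenience, here is a minimal sketch of both setup options, assuming the commands are run from the repository root:

  ```bash
  # Option A: build the Docker image from the provided Dockerfile.
  docker build -t genhowto .

  # Option B: install the required packages into an existing Python environment
  # (the code was tested with PyTorch 2.0).
  pip install diffusers==0.18.2 transformers xformers accelerate
  ```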
- Download GenHowTo model weights
  - Use the download_weights.sh script or download the GenHowTo weights manually.
  - We provide the following weights:
    - GenHowTo-STATES-96h-v1 for generating state transformations.
    - GenHowTo-ACTIONS-96h-v1 for generating actions.
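  A minimal sketch of fetching the weights with the script, assuming download_weights.sh is run from the repository root:

  ```bash
  # Download the released GenHowTo checkpoints (states and actions variants).
  bash download_weights.sh
  ```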
- Get predictions
  - Run the following command to get predictions for your image and prompt.
    python genhowto.py --weights_path weights/GenHowTo-STATES-96h-v1 --input_image path/to/image.jpg --prompt "your prompt" --output_path path/to/output.jpg --num_images 1 [--num_steps_to_skip 2]
  - --num_steps_to_skip is the number of steps to skip in the diffusion process. The higher the number, the more similar the generated image will be to the input image.
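  For example, a full invocation could look as follows; the input image, prompt, and output path below are hypothetical placeholders to replace with your own data:

  ```bash
  # Generate one state-transformation image for a single input frame.
  # --num_steps_to_skip 2 keeps the result relatively close to the input image
  # (higher values make the output more similar to the input).
  python genhowto.py \
      --weights_path weights/GenHowTo-STATES-96h-v1 \
      --input_image examples/pan_on_stove.jpg \
      --prompt "a pan with melted butter" \
      --output_path outputs/pan_with_butter.jpg \
      --num_images 1 \
      --num_steps_to_skip 2
  ```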
Evaluation
To replicate our evaluation, please follow the instructions in the evaluation directory.
Citation
@inproceedings{soucek2024genhowto,
title={GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos},
author={Sou\v{c}ek, Tom\'{a}\v{s} and Damen, Dima and Wray, Michael and Laptev, Ivan and Sivic, Josef},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2024}
}
Acknowledgements
This work was partly supported by the EU Horizon Europe Programme under the project EXA4MIND (No. 101092944) and the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90140). Part of this work was done within the University of Bristol’s Machine Learning and Computer Vision (MaVi) Summer Research Program 2023. Research at the University of Bristol is supported by EPSRC UMPIRE (EP/T004991/1) and EPSRC PG Visual AI (EP/T028572/1).