Extract video features from raw videos using multiple GPUs. We support RAFT flow frames as well as S3D, I3D, R(2+1)D, VGGish, CLIP, and TIMM models.
Video Features

video_features allows you to extract features from raw videos in parallel with multiple GPUs.
It supports several extractors that capture visual appearance, optical flow, and audio features.
See more details in Documentation.
Quick Start
Run with conda locally:
```bash
# clone the repo and change the working directory
git clone https://github.com/v-iashin/video_features.git
cd video_features
# install the environment
conda env create -f conda_env_torch_zoo.yml
# load the environment
conda activate torch_zoo
# extract R(2+1)D features for the sample videos
python main.py \
    feature_type=r21d \
    device_ids="[0]" \
    video_paths="[./sample/v_ZNVhz7ctTq0.mp4, ./sample/v_GGSY1Qvo990.mp4]"
# use `device_ids="[0, 2]"` to run on the 0th and 2nd devices in parallel
# or add `cpu=true` to use the CPU
```
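Once extraction finishes, the per-video features can be saved to disk (the tool supports saving options such as NumPy arrays) and loaded back for downstream tasks. Below is a minimal sketch of working with a saved feature file; the file name and the (10, 512) shape are illustrative assumptions (R(2+1)D features are typically one fixed-size vector per clip segment), not the tool's guaranteed output layout:

```python
import numpy as np

# Stand-in for a file produced by the extractor (hypothetical name and shape):
# one 512-dim feature vector per clip segment, 10 segments in this sketch.
np.save("v_ZNVhz7ctTq0_r21d.npy", np.random.rand(10, 512).astype(np.float32))

# Load the per-segment features back for downstream use
features = np.load("v_ZNVhz7ctTq0_r21d.npy")
print(features.shape)  # (10, 512)

# Average-pool over segments to get a single video-level descriptor
video_vec = features.mean(axis=0)
print(video_vec.shape)  # (512,)
```

Average pooling is just one common way to aggregate segment features; depending on the task, you may instead feed the full sequence into a temporal model.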
If you are more comfortable with Docker, there is a Docker image with a pre-installed environment that supports all models. Check out the Docker support documentation page.
Supported models
- Action Recognition
- Sound Recognition
- Optical Flow
- Image Recognition
- Language-Image Pretraining
Used in
Please let me know if you find this repo useful for your projects or papers.
Acknowledgements
- @Kamino666: added the CLIP model as well as Windows and CPU support
- @ohjho: added support for the 37-layer R(2+1)D flavor