Cross-Attentional-AV-Fusion
Cross Attentional Audio-Visual Fusion for Dimensional Emotion Recognition
Code for our paper "Cross Attentional Audio-Visual Fusion for Dimensional Emotion Recognition" accepted to IEEE FG 2021. Our paper can be found here.
Citation
If you find this code useful for your research, please cite our paper.
```bibtex
@INPROCEEDINGS{9667055,
  author={Praveen, R. Gnana and Granger, Eric and Cardinal, Patrick},
  booktitle={2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021)},
  title={Cross Attentional Audio-Visual Fusion for Dimensional Emotion Recognition},
  year={2021}
}
```
This code uses the RECOLA dataset to validate the proposed approach for dimensional emotion recognition. The repository is organized into three major blocks, preprocessing, training, and inference, which together reproduce the results of our paper. Training uses mixed precision (torch.cuda.amp). The dependencies and packages required to reproduce the environment of this repository can be found in the environment.yml file.
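For orientation, a single mixed-precision training step with torch.cuda.amp looks roughly like the minimal sketch below. The model, loss, and tensors are stand-ins, not the actual objects defined in main.py.

```python
import torch
import torch.nn as nn

# Minimal sketch of one mixed-precision training step with torch.cuda.amp.
# The model, loss, and tensors below are stand-ins, not the objects used in main.py.
device = "cuda"
model = nn.Linear(128, 2).to(device)            # stand-in for the audio-visual fusion model
criterion = nn.MSELoss()                        # stand-in regression loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()

features = torch.randn(8, 128, device=device)   # dummy fused features
targets = torch.randn(8, 2, device=device)      # dummy valence/arousal targets

optimizer.zero_grad()
with torch.cuda.amp.autocast():                 # forward pass runs in mixed precision
    loss = criterion(model(features), targets)
scaler.scale(loss).backward()                   # scale the loss to avoid fp16 gradient underflow
scaler.step(optimizer)                          # unscale gradients, then apply the update
scaler.update()                                 # adjust the loss scale for the next step
```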
Creating the environment
Create an environment using the environment.yml file:

```
conda env create -f environment.yml
```
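Then activate it before running any of the scripts. The environment name is the one defined in environment.yml; the placeholder below stands in for it.

```
conda activate <name-of-the-environment>
```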
Models
The pre-trained models of the audio backbones can be obtained here
The pre-trained models of the visual backbones can be obtained here
The fusion models trained using our fusion approach can be found here
audiomodel.t7: Audio model trained on the RECOLA dataset
visualmodel.t7: Visual model trained on the RECOLA dataset
cam_model.pt: Fusion model trained using our approach on the RECOLA dataset
Table of contents
- Preprocessing
  - Step One: Download the dataset
  - Step Two: Preprocess the visual modality
  - Step Three: Preprocess the audio modality
  - Step Four: Preprocess the annotations
- Training
  - Training the fusion model
- Inference
  - Generating the results
Preprocessing
Return to Table of Contents
Step One: Download the dataset
Return to Table of Contents
Please download the following:
- The dataset can be downloaded here
Step Two: Preprocess the visual modality
Return to Table of Contents
- You may choose to use the OpenFace toolkit to extract the cropped and aligned face images; a hypothetical invocation is sketched below.
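Purely as an illustration, a preprocessing script that shells out to OpenFace's FeatureExtraction binary could look as follows. All paths and the glob pattern are placeholders, and the -f / -out_dir / -simalign flags come from the OpenFace 2.x command line, so verify them against your installation.

```python
import subprocess
from pathlib import Path

# Hypothetical sketch: run OpenFace's FeatureExtraction binary on each RECOLA video
# to dump cropped, similarity-aligned face images. All paths are placeholders, and the
# flags should be double-checked against your OpenFace version.
OPENFACE_BIN = "/path/to/OpenFace/build/bin/FeatureExtraction"   # placeholder path
VIDEO_DIR = Path("/path/to/RECOLA/videos")                       # placeholder path
OUT_DIR = Path("/path/to/RECOLA/cropped_aligned")                # placeholder path

for video in sorted(VIDEO_DIR.glob("*.mp4")):                    # adjust the extension to the recordings
    subprocess.run(
        [OPENFACE_BIN,
         "-f", str(video),                       # input video
         "-out_dir", str(OUT_DIR / video.stem),  # one output folder per video
         "-simalign"],                           # write similarity-aligned face crops
        check=True,
    )
```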
Step Three: Preprocess the audio modality
Return to Table of Contents
- The audio tracks are extracted with mkvextract and segmented so that the resulting audio files are aligned with the corresponding visual files; a hypothetical extraction step is sketched below.
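As an illustration only, the extraction step could be scripted as below. The paths, file extensions, and the audio track id are assumptions; inspect the recordings with mkvinfo to find the correct track, and note that mkvextract dumps the track as stored rather than transcoding it. The subsequent segmentation into clips aligned with the visual files is not shown.

```python
import subprocess
from pathlib import Path

# Hypothetical sketch: extract the audio track of each recording with mkvextract
# (MKVToolNix). Paths, extensions, and the track id are placeholders; check the
# actual track layout with `mkvinfo`. mkvextract dumps the stored track without
# transcoding, so the output extension must match the track's codec.
RECORDING_DIR = Path("/path/to/RECOLA/recordings")   # placeholder path
AUDIO_DIR = Path("/path/to/RECOLA/audio")            # placeholder path
AUDIO_TRACK_ID = 1                                   # placeholder track id

AUDIO_DIR.mkdir(parents=True, exist_ok=True)
for recording in sorted(RECORDING_DIR.glob("*.mkv")):
    out_file = AUDIO_DIR / f"{recording.stem}.wav"   # assumes a PCM/WAV audio track
    subprocess.run(
        ["mkvextract", str(recording), "tracks", f"{AUDIO_TRACK_ID}:{out_file}"],
        check=True,
    )
```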
Step Four: Preprocess the annotations
Return to Table of Contents
- The annotations provided by the dataset organizers are preprocessed to obtain the labels for the aligned audio and visual files; the general idea is sketched below.
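The details depend on the format in which the RECOLA annotations are distributed; the sketch below only illustrates mapping time-continuous valence/arousal ratings onto fixed-length segments. The file names, column names, frame rate, and segment length are all hypothetical.

```python
import pandas as pd

# Hypothetical sketch: turn time-continuous valence/arousal ratings into one label
# per fixed-length segment. File names, column names, frame rate, and segment
# length are placeholders, not the values used in this repository.
FPS = 25           # placeholder visual frame rate
SEG_LEN = 16       # placeholder number of frames per segment

ratings = pd.read_csv("annotations/subject_ratings.csv")   # placeholder file with
                                                           # "valence" and "arousal" columns
labels = []
for start in range(0, len(ratings) - SEG_LEN + 1, SEG_LEN):
    window = ratings.iloc[start:start + SEG_LEN]
    labels.append({
        "segment": start // SEG_LEN,
        "start_time": start / FPS,
        "valence": window["valence"].mean(),   # one valence label per segment
        "arousal": window["arousal"].mean(),   # one arousal label per segment
    })

pd.DataFrame(labels).to_csv("labels/subject_segments.csv", index=False)
```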
Training
Return to Table of Contents
- After obtaining the preprocessed audio and visual files along with their annotations, the fusion model can be trained with the proposed fusion approach using the main.py script; a generic illustration of cross-modal attention is given below.
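The actual cross-attention module is defined in the repository code and described in the paper. Purely to illustrate the general idea of letting each modality attend to the other, a generic (and not necessarily identical) formulation could look like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GenericCrossModalAttention(nn.Module):
    """Generic illustration of cross-modal attention between audio and visual
    feature sequences. This is NOT the exact cross-attention module of the paper;
    see the repository code for the actual implementation."""

    def __init__(self, dim):
        super().__init__()
        self.proj_audio = nn.Linear(dim, dim, bias=False)
        self.proj_visual = nn.Linear(dim, dim, bias=False)

    def forward(self, audio, visual):
        # audio:  (batch, time, dim) audio features
        # visual: (batch, time, dim) visual features
        # Cross-correlation between the two modalities across time steps.
        corr = torch.bmm(self.proj_audio(audio), self.proj_visual(visual).transpose(1, 2))
        corr = corr / audio.size(-1) ** 0.5
        # Audio queries attend over visual features, and vice versa.
        visual_for_audio = torch.bmm(F.softmax(corr, dim=-1), visual)
        audio_for_visual = torch.bmm(F.softmax(corr.transpose(1, 2), dim=-1), audio)
        # Fuse each modality with the context attended from the other one.
        return audio + visual_for_audio, visual + audio_for_visual


# Quick shape check with dummy feature sequences.
attention = GenericCrossModalAttention(dim=128)
audio_feats = torch.randn(2, 50, 128)    # dummy audio features
visual_feats = torch.randn(2, 50, 128)   # dummy visual features
fused_audio, fused_visual = attention(audio_feats, visual_feats)
print(fused_audio.shape, fused_visual.shape)   # torch.Size([2, 50, 128]) twice
```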
Inference
Return to Table of Contents
- The results of the paper can be reproduced by running inference with the trained fusion model (cam_model.pt); a loading sketch is shown below.
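As a hypothetical illustration of the inference step, the snippet below assumes that cam_model.pt stores the whole fusion module and that the model takes one audio and one visual feature tensor; the shapes and checkpoint format are assumptions, so refer to the repository's inference code for the actual procedure.

```python
import torch

# Hypothetical inference sketch. It assumes cam_model.pt stores the whole fusion
# module (not just a state_dict) and that the model takes an audio and a visual
# feature tensor; the shapes below are dummies.
model = torch.load("cam_model.pt", map_location="cpu")
model.eval()

with torch.no_grad():
    audio_feats = torch.randn(1, 50, 128)               # dummy audio features
    visual_feats = torch.randn(1, 50, 128)              # dummy visual features
    valence_arousal = model(audio_feats, visual_feats)  # predicted valence/arousal
    print(valence_arousal)
```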