About iemocap_feature.pkl
Thank you for your work !!
I want to train on my own dataset. What data format is contained in the iemocap_features.pkl file of the dataset? Is there a sample script for creating iemocap_features.pkl or MELD_features_raw1.pkl?
Thank you~
You can see how the features are loaded in the dataloader.py file, including the modality features and the dialogue/speaker information. The multimodal features used in our paper come from the MMGCN work (where the text features are FastText+CNN), but unfortunately the original authors did not open-source the feature-extraction code.
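If you just want to see what a features pickle contains, you can load it and walk its structure. The toy layout below (per-dialogue dicts for speakers/labels/features plus train/test ID lists) is only an assumption mimicking common ERC-dataset pickles; the authoritative unpacking for this repo is in dataloader.py.

```python
import pickle

# Build a tiny synthetic pickle so this sketch is self-contained.
# NOTE: the real iemocap_features.pkl layout may differ; this only
# mimics a common per-dialogue-dict layout as an illustration.
toy = (
    {"Ses01F_impro01": ["F", "M"]},         # speaker per utterance
    {"Ses01F_impro01": [0, 2]},             # emotion label per utterance
    {"Ses01F_impro01": [[0.1] * 100] * 2},  # text features per utterance
    {"Ses01F_impro01": [[0.2] * 100] * 2},  # audio features per utterance
    ["Ses01F_impro01"],                     # train dialogue IDs
    [],                                     # test dialogue IDs
)
with open("toy_features.pkl", "wb") as f:
    pickle.dump(toy, f)

def describe(obj, name="root", depth=0, max_depth=3):
    """Recursively print the type and size of each component."""
    pad = "  " * depth
    if isinstance(obj, dict):
        print(f"{pad}{name}: dict, {len(obj)} keys")
        if depth < max_depth:
            for k in list(obj)[:1]:  # peek at one entry per dict
                describe(obj[k], repr(k), depth + 1, max_depth)
    elif isinstance(obj, (list, tuple)):
        print(f"{pad}{name}: {type(obj).__name__}, len {len(obj)}")
        if obj and depth < max_depth:
            describe(obj[0], "[0]", depth + 1, max_depth)
    else:
        print(f"{pad}{name}: {type(obj).__name__}")

with open("toy_features.pkl", "rb") as f:
    data = pickle.load(f)
describe(data)
```

Running `describe` on the actual iemocap_features.pkl (instead of the toy file) will show you its real component types and nesting, which you can then match against the unpacking code in dataloader.py.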
In this work, the multimodal feature extraction and the subsequent context modeling (dialogue/modality) are decoupled into two stages. Therefore, we suggest that you directly adopt the feature extractor you consider more suitable (such as using RoBERTa as the text feature extractor) for the new dataset.
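Because the two stages are decoupled, building a features pickle for a new dataset amounts to running your chosen extractor per utterance and saving the results. The sketch below uses a placeholder random-vector encoder standing in for a real extractor (e.g. RoBERTa), and the file name and tuple layout are assumptions for illustration, not this repo's exact format.

```python
import pickle
import random

def extract_text_features(sentence, dim=768):
    """Placeholder per-utterance encoder: deterministic random vectors.
    Swap in a real extractor (e.g. mean-pooled RoBERTa hidden states)
    for actual use; dim=768 matches RoBERTa-base."""
    random.seed(hash(sentence) % (2 ** 32))
    return [random.random() for _ in range(dim)]

# Toy corpus: dialogue_id -> list of (speaker, sentence, label) utterances.
dialogues = {
    "dia0": [("A", "hello there", 0), ("B", "hi, how are you", 1)],
    "dia1": [("A", "this is bad news", 2)],
}

# Reorganize into per-dialogue dicts, one entry per utterance.
speakers = {d: [u[0] for u in utts] for d, utts in dialogues.items()}
labels   = {d: [u[2] for u in utts] for d, utts in dialogues.items()}
text     = {d: [extract_text_features(u[1]) for u in utts]
            for d, utts in dialogues.items()}
train_ids, test_ids = ["dia0"], ["dia1"]

# Save everything as one pickle (hypothetical file name and layout).
with open("my_features.pkl", "wb") as f:
    pickle.dump((speakers, labels, text, train_ids, test_ids), f)
```

Audio/visual features can be added the same way as extra per-dialogue dicts; the context-modeling stage then only needs a dataloader that unpacks whatever layout you chose.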
While the two-stage training approach is friendlier to the datasets used in the paper (it facilitates comparing different methods for dialogue/modality context modeling and usually achieves better results), we recommend an end-to-end training approach for obtaining baselines on new datasets: training is simpler, and the results are usually very close to the two-stage approach.
Thank you for your attention. If you have any questions during use, please feel free to reach us by email at [email protected].