MMER
Code for the InterSpeech 2023 paper: MMER: Multimodal Multi-task learning for Speech Emotion Recognition
I made the changes that I described in issue #8. I haven't tried the code because I don't have the dataset yet, but it should work.
The README doesn't specify which version of Python the project uses, and the requirements.txt pins a dev version of the PyTorch library. Maybe an installation with Poetry could...
@Sreyan88 Please consider these bug fixes in run_iemocap.py: 1.1. add `from mmi_module import MMI_Model` 1.2. L545: `run(args, config, train_data, valid_data, str(i))` 1.3. L436: `learning_rate = args.learning_rate` 2. In README all...
Hi, I'm trying to implement your paper. At this point, I'm trying to use the textual information in order to extract BERT features. For that reason, I tried executing train_and_validate.py using...
Dear author, please guide me on how to use this model to reproduce the results. I have cloned the repo on my device. What steps should I now follow for training in...
# Patching CVE-2007-4559 Hi, we are security researchers from the Advanced Research Center at [Trellix](https://www.trellix.com). We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a...
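For context, CVE-2007-4559 is the long-standing path-traversal flaw in Python's `tarfile` module: `extractall()` will happily follow member names containing `..` and write files outside the target directory. A minimal guard, along the lines of the patches this campaign proposes (the helper name `safe_extract` is illustrative, not the exact code from the Trellix PR), might look like:

```python
import os
import tarfile


def safe_extract(tar: tarfile.TarFile, path: str = ".") -> None:
    """Extract a tar archive, rejecting members that escape `path`.

    Guards against the path-traversal flaw described in CVE-2007-4559,
    where a member named e.g. "../evil.txt" is written outside the
    intended extraction directory.
    """
    base = os.path.realpath(path)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(path, member.name))
        # The resolved target must be `base` itself or live under it.
        if target != base and not target.startswith(base + os.sep):
            raise RuntimeError(
                f"Blocked path traversal in tar member: {member.name}"
            )
    tar.extractall(path)
```

Note that Python 3.12+ addresses this natively via extraction filters (PEP 706), e.g. `tar.extractall(path, filter="data")`; the manual check above is mainly relevant for older interpreters.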