ZebangCheng

75 comments of ZebangCheng

> Thank you for your reply.
>
> When you said, "You can follow the tutorial to pre-extract the features," ([#48](https://github.com/ZebangCheng/Emotion-LLaMA/issues/48)) did you mean that there is no tutorial...

> I will refer to issue [#32](https://github.com/ZebangCheng/Emotion-LLaMA/issues/32).
>
> 1. I’m trying to create my dataset in the same format as MERR for evaluation and subsequent fine-tuning.
> 2. In...

Yes, that's correct. In the initial preprocessing stage, we need to retain all the extracted face frames from each video to perform feature extraction.
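
As a concrete illustration of this preprocessing step, here is a minimal sketch that saves every detected face crop from a video for later feature extraction. The sampling stride and the Haar-cascade detector are assumptions for illustration, not the exact pipeline used by Emotion-LLaMA:

```python
import os
import cv2

def extract_face_frames(video_path, out_dir, every_n_frames=5):
    """Save every detected face crop from `video_path` into `out_dir`."""
    os.makedirs(out_dir, exist_ok=True)
    # Haar cascade is just one readily available detector; any face
    # detector could be substituted here.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:  # sampling stride is an assumption
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for (x, y, w, h) in faces:
                crop = frame[y:y + h, x:x + w]
                cv2.imwrite(os.path.join(out_dir, f"face_{saved:05d}.jpg"), crop)
                saved += 1
        idx += 1
    cap.release()
    return saved
```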

If your goal is emotion classification (emotion recognition), I would recommend focusing directly on training for classification. Additionally, it's a good idea to experiment with different checkpoints — such as...
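
To make the checkpoint comparison concrete, here is a minimal sketch of looping over several saved checkpoints and scoring each on a held-out set. The file names and the `evaluate_fn` callback are hypothetical placeholders, not part of the repository:

```python
import torch

def compare_checkpoints(model, val_loader, checkpoint_paths, evaluate_fn):
    """Load each checkpoint into `model` and record the metric from `evaluate_fn`."""
    results = {}
    for path in checkpoint_paths:
        state = torch.load(path, map_location="cpu")
        # Some training scripts wrap the weights under a "model" key;
        # fall back to the raw state dict otherwise.
        model.load_state_dict(state.get("model", state), strict=False)
        results[path] = evaluate_fn(model, val_loader)
    return results
```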

Yes, you're absolutely right. If you're only fine-tuning for the recognition task, you don't need the annotation JSON file (e.g., `MERR_fine_grained.json`). Only the TXT file containing the labels (e.g., `MERR_fine_grained.txt`)...
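
For illustration, a small sketch of reading such a label file, assuming each line pairs a sample name with an emotion label separated by whitespace (the exact layout of `MERR_fine_grained.txt` may differ):

```python
def load_labels(txt_path):
    """Return a {sample_name: emotion_label} mapping from a whitespace-separated file."""
    labels = {}
    with open(txt_path, "r", encoding="utf-8") as f:
        for line in f:
            parts = line.strip().split()
            if len(parts) >= 2:
                name, label = parts[0], parts[1]
                labels[name] = label
    return labels
```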

1. The MER2023 dataset includes three tasks: MER-Multi, MER-Noise, and MER-Semi. The first two tasks utilize the sentiment value (`V`), whereas MER-Semi does not. However, to maintain a consistent file...
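
Since the point above is about keeping a consistent file layout across the three tasks, here is a hedged sketch of one way to do that: rows from MER-Multi and MER-Noise carry a real sentiment value, while MER-Semi rows get a placeholder so every row has the same columns. The column names and the placeholder value are illustrative assumptions, not the dataset's actual format:

```python
import csv

def write_unified_labels(rows, out_csv):
    """Write (name, discrete_label, valence) rows with a fixed column layout.

    `valence` may be None for MER-Semi samples; the placeholder (-10 here,
    an arbitrary illustrative choice) keeps the file format consistent.
    """
    with open(out_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "discrete", "valence"])
        for name, label, valence in rows:
            writer.writerow([name, label, -10 if valence is None else valence])
```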

**A1)** Yes, the averaged features are not used during the MAE training; only individual frame features are used for training purposes.

**A2)** We did not quantitatively evaluate or record the...
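
To make the distinction in A1 concrete: the MAE consumes per-frame features individually, while a video-level representation can be obtained afterwards by mean-pooling across frames. A minimal sketch with illustrative shapes:

```python
import torch

frame_features = torch.randn(32, 768)        # 32 frames, 768-dim feature each (assumed shapes)
video_feature = frame_features.mean(dim=0)   # (768,) averaged video-level representation
print(video_feature.shape)                   # torch.Size([768])
```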

You are absolutely right. The inference processes in `app_EmotionLlamaClient.py` and `eval_emotion.py` are indeed different. For simplicity, the inference code in `app_EmotionLlamaClient.py` does not use video features or local features. In...
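
To illustrate the difference, here is a hedged sketch (hypothetical names, not the repository's actual code) of how the two paths might assemble the model inputs: the demo-style client passes only the image embedding, while an eval-style path would additionally include the pre-extracted video and local (face) features:

```python
import torch

def build_prompt_embeds(image_emb, video_emb=None, local_emb=None):
    """Concatenate available feature tokens along the sequence dimension.

    Each tensor is assumed to be (batch, tokens, dim). A demo-style path
    passes only `image_emb`; an eval-style path would also pass the
    pre-extracted video and local (face) features.
    """
    embs = [e for e in (video_emb, local_emb, image_emb) if e is not None]
    return torch.cat(embs, dim=1)
```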

**A1)** I'm not very familiar with performing validation during training, as I usually monitor the loss values instead. However, you can refer to the following part of the code for...

You're absolutely right — monitoring the loss on the validation set is indeed much more informative than just looking at the training loss. However, for the datasets we focus on,...
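
For completeness, a minimal sketch of computing a validation loss each epoch alongside the training loss; the model, loader, and criterion here are placeholders for whatever the training script defines:

```python
import torch

@torch.no_grad()
def validation_loss(model, val_loader, criterion, device="cuda"):
    """Average per-sample loss over the validation set."""
    model.eval()
    total, n = 0.0, 0
    for inputs, targets in val_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        loss = criterion(model(inputs), targets)
        total += loss.item() * targets.size(0)
        n += targets.size(0)
    model.train()
    return total / max(n, 1)
```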