
Code results are poor

Open geekbeing opened this issue 3 years ago • 3 comments

Hello, I tried to reproduce your results by running the code. Why is there such a large gap from the numbers reported in the paper, to the point that the model seems to be ineffective? I did not modify your code at all and ran it as-is. Below are the results for the CGBERT model at epoch = 29:

```
05/07/2021 01:46:20 - INFO - util.train_helper - epoch = 29
05/07/2021 01:46:20 - INFO - util.train_helper - global_step = 18750
05/07/2021 01:46:20 - INFO - util.train_helper - loss = 1.0986090652494622
05/07/2021 01:46:20 - INFO - util.train_helper - test_loss = 1.0970154359082507
05/07/2021 01:46:20 - INFO - util.train_helper - test_accuracy = 0.8382118147951038
05/07/2021 01:46:20 - INFO - util.train_helper - aspect_strict_Acc = 0.47897817988291647
05/07/2021 01:46:20 - INFO - util.train_helper - aspect_Macro_F1 = 0
05/07/2021 01:46:20 - INFO - util.train_helper - aspect_Macro_AUC = 0.47944606445295795
05/07/2021 01:46:20 - INFO - util.train_helper - sentiment_Acc = 0.6661184210526315
05/07/2021 01:46:20 - INFO - util.train_helper - sentiment_Macro_AUC = 0.48320620795185415
```

Below are the results for the QACGBERT model at epoch = 24, as well as the final result:

```
05/08/2021 00:45:03 - INFO - util.train_helper - ***** Evaluation Interval Hit *****
05/08/2021 00:45:07 - INFO - util.train_helper - ***** Evaluation results *****
05/08/2021 00:45:07 - INFO - util.train_helper - epoch = 24
05/08/2021 00:45:07 - INFO - util.train_helper - global_step = 15750
05/08/2021 00:45:07 - INFO - util.train_helper - loss = 1.4141508170202666
05/08/2021 00:45:07 - INFO - util.train_helper - test_loss = 1.4643629932118034
05/08/2021 00:45:07 - INFO - util.train_helper - test_accuracy = 0.40375
05/08/2021 00:45:07 - INFO - util.train_helper - aspect_P = 0.3350454365863295
05/08/2021 00:45:07 - INFO - util.train_helper - aspect_R = 0.8273170731707317
05/08/2021 00:45:07 - INFO - util.train_helper - aspect_F = 0.47694038245219356
05/08/2021 00:45:07 - INFO - util.train_helper - sentiment_Acc_4_classes = 0.36390243902439023
05/08/2021 00:45:07 - INFO - util.train_helper - sentiment_Acc_3_classes = 0.5220966084275437
05/08/2021 00:45:07 - INFO - util.train_helper - sentiment_Acc_2_classes = 0.6234357224118316
05/08/2021 00:45:34 - INFO - util.train_helper - ***** Global best performance *****
05/08/2021 00:45:34 - INFO - util.train_helper - accuracy on dev set: 0.4942233632862644
```

geekbeing commented on May 07 '21

Hi,

Thanks for your feedback. Could you provide the command you ran so I can track down the root cause of the issue?

Thanks.

frankaging commented on May 08 '21

I just ran the run.sh file with the following command:

```bash
# example running command
CUDA_VISIBLE_DEVICES=0 python run_classifier.py \
--task_name semeval_NLI_M \
--data_dir ../datasets/semeval2014/ \
--output_dir ../results/semeval2014/QACGBERT-2/ \
--model_type QACGBERT \
--do_lower_case \
--max_seq_length 128 \
--train_batch_size 24 \
--eval_batch_size 24 \
--learning_rate 2e-5 \
--num_train_epochs 30 \
--vocab_file ../models/BERT-Google/vocab.txt \
--bert_config_file ../models/BERT-Google/bert_config.json \
--init_checkpoint ../models/BERT-Google/pytorch_model.bin \
--seed 123 \
--evaluate_interval 250 \
--context_standalone
```

geekbeing commented on May 09 '21

Thanks! I am sorry that I did not keep this repo up to date. Here is the reason you are experiencing this catastrophic failure.

I had updated this repo for other projects to study different learning-rate schedules for different layers, which is not a topic covered in this paper, and that introduced the issue (as your runs show!). If you look at my recent push, I commented out those lines for the PR you opened: https://github.com/frankaging/Quasi-Attention-ABSA/commit/770d810a9b509857f8f898b49c3167a361645373. Before that change, some linear layers were being trained with a very high learning rate, which broke training.
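For context, here is a minimal sketch of the failure mode using PyTorch optimizer parameter groups. The module names and the exact rates below are illustrative placeholders, not the repo's actual code:

```python
# Minimal illustration (not the actual repo code) of per-layer learning
# rates via PyTorch optimizer parameter groups, and why an overly high
# rate on a subset of layers can break BERT fine-tuning.
import torch
from torch import nn

# stand-ins for a BERT encoder block and a task-specific linear head
encoder = nn.Linear(768, 768)
classifier = nn.Linear(768, 3)

base_lr = 2e-5  # the --learning_rate from the command line

# Problematic setup: some linear layers get a learning rate orders of
# magnitude above the usual BERT fine-tuning range. A loss stuck near
# ln(3) ~= 1.0986 over 3 classes, as in the CGBERT log above, suggests
# the model collapsed to near-uniform predictions.
bad_optimizer = torch.optim.Adam([
    {"params": encoder.parameters(), "lr": base_lr},
    {"params": classifier.parameters(), "lr": 1e-2},  # far too high
])

# After the fix (per-layer overrides commented out): a single base
# learning rate for all parameters.
good_optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()),
    lr=base_lr,
)
```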

You can do the following to remediate the catastrophic failure: (1) pull the latest commit, and (2) rerun with the updated command.
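Concretely, assuming you cloned the repo with git (the run.sh path may differ in your checkout):

```bash
git pull     # picks up commit 770d810, which disables the per-layer override
bash run.sh  # rerun with the updated example command
```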

Since I was working on this repo for other projects, I may have forgotten to remove some code here and there. When I have time, I will update it all at once. Thanks again for reporting this; it matters! If you still experience this catastrophic failure, please let me know. If not, please kindly close this issue.

Thanks, Zen

frankaging commented on May 09 '21