
PEMS-BAY results

vgsatorras opened this issue · 1 comment

Hi,

Thank you for publishing the code. I am trying to reproduce the results for the PEMS-BAY dataset, but the loss I get is larger than the one reported in the Appendix of the paper. I just pulled the repository and ran the code with the provided commands. Below I copy the log at epoch 99.

2021-09-23 02:36:07,372 - INFO - Epoch [99/200] (57000) train_mae: 10.9693, val_mae: 2.6471
2021-09-23 02:36:48,472 - INFO - Test: mae: 2.5019, mape: 0.0420, rmse: 4.2803
2021-09-23 02:36:48,473 - INFO - Horizon 15mins: mae: 1.4035, mape: 0.0296, rmse: 3.0428
2021-09-23 02:36:48,473 - INFO - Horizon 30mins: mae: 1.8508, mape: 0.0425, rmse: 4.3016
2021-09-23 02:36:48,473 - INFO - Horizon 60mins: mae: 2.3758, mape: 0.0592, rmse: 5.5099
2021-09-23 02:36:48,474 - INFO - Epoch [99/200] (57000) train_mae: 10.9693, test_mae: 2.5019, lr: 0.000005, 357.9s, 378.4s
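
(For readers puzzling over this log format: PEMS-BAY is sampled at 5-minute intervals, so the 15/30/60-minute horizons correspond to prediction steps 3, 6, and 12 of a 12-step output, and DCRNN-style codebases report a masked MAE/MAPE/RMSE that excludes missing (zero) sensor readings. A minimal sketch of that metric computation follows; the helper name and tensor layout are assumptions for illustration, not this repo's exact code.)

import numpy as np

def masked_metrics(preds, labels, null_val=0.0):
    """DCRNN-style masked MAE/MAPE/RMSE: entries equal to null_val
    (missing sensor readings) are excluded from the average.
    preds/labels: arrays of shape (num_samples, num_nodes)."""
    mask = (labels != null_val).astype(np.float64)
    mask /= mask.mean()  # re-weight so valid entries average to 1
    err = preds - labels
    with np.errstate(divide="ignore", invalid="ignore"):
        # inf/nan produced at masked-out entries are dropped by nanmean
        mae = np.nanmean(np.abs(err) * mask)
        mape = np.nanmean(np.abs(err / labels) * mask)
        rmse = np.sqrt(np.nanmean((err ** 2) * mask))
    return mae, mape, rmse

# Assumed layout: y_pred, y_true of shape (horizon=12, num_samples, num_nodes).
# With 5-minute data, 0-based step indices 2, 5, 11 are the 15/30/60-min horizons:
# for step, name in [(2, "15mins"), (5, "30mins"), (11, "60mins")]:
#     print(name, masked_metrics(y_pred[step], y_true[step]))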

The training loss seems too large; could the training be diverging? Perhaps an error was introduced into the repository in one of the recent updates?

Best, Victor

vgsatorras · Sep 23 '21

Thanks for your message. I checked this situation: the performance on PEMS-BAY does have a gap relative to our previous implementation, and it might come from the recent updates. I will check the code, re-tune the parameters, and get back to you soon. Thanks for the reminder.

Update: I quickly fine-tuned some parameters. It seems I had used too large a learning rate before; when I set base_lr to 0.001, the performance improved. The model appears to be quite sensitive to the learning-rate parameters: base_lr, lr_decay_ratio, and steps.
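
(To make the roles of those three parameters concrete: in DCRNN-style trainers like this repo's, base_lr, lr_decay_ratio, and steps typically map onto torch.optim.lr_scheduler.MultiStepLR as the initial learning rate, the decay factor, and the milestone epochs. A minimal sketch of that interaction, assuming those semantics; the milestone values and model below are illustrative, not the repo's defaults.)

import torch

model = torch.nn.Linear(10, 1)   # stand-in for the actual GTS model

base_lr = 0.001                  # the value that fixed the gap above
lr_decay_ratio = 0.1             # multiplicative decay at each milestone
steps = [20, 30, 40]             # epochs at which the decay fires (illustrative)

optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=steps, gamma=lr_decay_ratio)

for epoch in range(50):
    optimizer.step()             # stand-in for a full training epoch
    scheduler.step()
    if epoch + 1 in steps:
        # lr is base_lr * lr_decay_ratio ** (number of milestones passed)
        print(f"after epoch {epoch + 1}: lr = {optimizer.param_groups[0]['lr']:.6f}")

With schedules like this, a too-large base_lr is amplified early in training, before any milestone has fired, which is consistent with the divergence-like behavior reported above.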

chaoshangcs · Sep 27 '21