
Test Loss behaviour

Open · rishj97 opened this issue 5 years ago · 1 comment

When training, the test loss does not decrease (as it should), but the Hit Ratio keeps increasing. Doesn't this suggest that the loss function being used is incorrect? For example, here is the log from training DeepICF (with attention, same config params as in the README):

```text
Epoch 0 [153.6s + 600.3s]: HR = 0.5296, NDCG = 0.2971, loss = 0.2525 [54.1s] train_loss = 0.3235 [184.5s]
Epoch 1 [173.1s + 594.7s]: HR = 0.5957, NDCG = 0.3382, loss = 0.2550 [49.2s] train_loss = 0.2908 [152.5s]
Epoch 2 [163.4s + 541.1s]: HR = 0.6270, NDCG = 0.3589, loss = 0.2763 [48.7s] train_loss = 0.2734 [168.5s]
Epoch 3 [165.3s + 540.0s]: HR = 0.6444, NDCG = 0.3757, loss = 0.2468 [49.2s] train_loss = 0.2640 [169.9s]
Epoch 4 [167.5s + 522.3s]: HR = 0.6526, NDCG = 0.3818, loss = 0.2376 [49.0s] train_loss = 0.2592 [162.8s]
Epoch 5 [166.1s + 528.3s]: HR = 0.6588, NDCG = 0.3868, loss = 0.2655 [42.5s] train_loss = 0.2539 [149.1s]
Epoch 6 [163.5s + 491.9s]: HR = 0.6596, NDCG = 0.3893, loss = 0.2553 [44.8s] train_loss = 0.2512 [151.7s]
Epoch 7 [164.6s + 456.7s]: HR = 0.6666, NDCG = 0.3938, loss = 0.2512 [36.5s] train_loss = 0.2488 [125.5s]
```

And so on. Do you know what might be causing this, or whether there is a way to make the test loss decrease as consistently as the training loss does?
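For what it's worth, this divergence is not necessarily a sign that the loss function is wrong: pointwise log loss and ranking metrics like HR measure different things. A model can get the *ordering* of items right (so HR improves) while becoming overconfident on some negatives (which log loss punishes heavily). A toy sketch of this effect, with invented probabilities that have nothing to do with the actual DeepICF outputs:

```python
import math

def log_loss(labels, probs):
    """Mean binary cross-entropy over (item, label) pairs."""
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(labels, probs)) / len(labels)

def hr_at_1(labels, probs):
    """Hit Ratio@1: 1 if the top-scored item is the positive one, else 0."""
    top = max(range(len(probs)), key=lambda i: probs[i])
    return labels[top]

labels = [1, 0, 0, 0]  # one positive item among three negatives

# Earlier epoch: positive ranked 2nd (HR@1 = 0), but no prediction is
# overconfident, so log loss is moderate.
probs_early = [0.40, 0.50, 0.30, 0.30]

# Later epoch: positive now ranked 1st (HR@1 = 1), but two negatives
# get high-confidence wrong scores, which log loss punishes heavily.
probs_late = [0.95, 0.90, 0.90, 0.05]

print(hr_at_1(labels, probs_early), round(log_loss(labels, probs_early), 3))  # 0 0.581
print(hr_at_1(labels, probs_late),  round(log_loss(labels, probs_late), 3))   # 1 1.177
```

So HR@1 goes from 0 to 1 while log loss nearly doubles. If the goal is top-N recommendation, evaluating (and early-stopping) on HR/NDCG rather than on the test loss is the usual practice.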

rishj97 avatar May 12 '19 09:05 rishj97

same question @linzh92

Honolu avatar Mar 09 '20 15:03 Honolu