adversarial_personalized_ranking

An academic question about AMF: how to understand a large-step backward followed by a small-step forward as improving the robustness of the model

BinFuPKU opened this issue • 0 comments

Dear Prof. He: I am a junior PhD student at Peking University. I have read your paper and code, but I still have a problem understanding the essence of the whole idea.

In recent months, I have been thinking about using GANs in recommender systems, but it is hard, since ratings are discrete and perturbing a rating is not appropriate!

We know GANs are used to enhance the robustness of a model by creating more fine-grained negative samples from noise. Your paper instead perturbs the parameters (the latent factors) of BPR-MF after it has converged.
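To make sure I understand the perturbation you apply, here is a minimal numpy sketch of it for a single BPR triple (u, i, j), restricted to the user factor for brevity. The function names and the `eps` value are my own choices, not from your code; the construction itself is the fast-gradient one from the paper, i.e. a step of size eps along the normalized gradient of the BPR loss:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_loss(p_u, q_i, q_j):
    """BPR loss for one (user, pos item, neg item) triple: -ln sigmoid(p_u . (q_i - q_j))."""
    return -np.log(sigmoid(p_u @ (q_i - q_j)))

def bpr_grad_pu(p_u, q_i, q_j):
    """Analytic gradient of the BPR loss above w.r.t. the user factor p_u."""
    x_uij = p_u @ (q_i - q_j)
    return -(1.0 - sigmoid(x_uij)) * (q_i - q_j)

def adversarial_delta(p_u, q_i, q_j, eps=0.5):
    """FGSM-style perturbation: step of size eps along the normalized
    gradient, which locally *increases* the BPR loss (gradient ascent)."""
    g = bpr_grad_pu(p_u, q_i, q_j)
    return eps * g / (np.linalg.norm(g) + 1e-12)
```

Adding this delta to p_u raises the BPR loss on that triple, which is the "adversarial" part of the objective as I read it.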

However, the parameters of BPR are stable and almost invariant after convergence. The adversarial loss added to the BPR loss then amounts, in optimization terms, to a gradient ascent plus a gradient descent (the value of delta is taken along the gradient of the BPR loss, so adding it can be seen as gradient ascent that enlarges the adversarial loss). This looks very much like a large-step backward followed by a small-step forward that adjusts the parameter values (a trade-off).
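To state my "large-step backward, small-step forward" reading concretely, here is one APR-style update sketched in numpy (my own toy code, items held fixed, hyperparameter values assumed): step 1 is the ascent that builds the worst-case perturbation, step 2 is a small descent step on the combined loss with delta treated as a constant:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_loss(p_u, q_i, q_j):
    """BPR loss for one triple: -ln sigmoid(p_u . (q_i - q_j))."""
    return -np.log(sigmoid(p_u @ (q_i - q_j)))

def bpr_grad(p_u, q_i, q_j):
    """Gradient of the BPR loss w.r.t. the user factor p_u."""
    x_uij = p_u @ (q_i - q_j)
    return -(1.0 - sigmoid(x_uij)) * (q_i - q_j)

def apr_step(p_u, q_i, q_j, eps=0.5, lam=1.0, lr=0.05):
    """One APR-style update on p_u (item factors held fixed for brevity).
    Returns the updated user factor and the perturbation used."""
    # Step 1 ("large-step backward"): gradient *ascent* of size eps
    g = bpr_grad(p_u, q_i, q_j)
    delta = eps * g / (np.linalg.norm(g) + 1e-12)
    # Step 2 ("small-step forward"): gradient *descent* on
    # L_BPR(theta) + lam * L_BPR(theta + delta), delta held constant
    g_total = bpr_grad(p_u, q_i, q_j) + lam * bpr_grad(p_u + delta, q_i, q_j)
    return p_u - lr * g_total, delta
```

If this sketch matches your method, the "GAN" view reduces to exactly this two-step gradient adjustment, which is the interpretation I am asking about below.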

So the whole idea can, in essence, be interpreted as adjusting the gradient optimization, right? The GAN is just the outer wear.

But the improvement from BPR-MF to AMF is amazing. How is it obtained without noisier input, fewer parameters, or an adjusted regularizer? I find it very puzzling to come up with a good explanation.

Looking forward to your reply!

BinFuPKU · Apr 19 '19 15:04