
Adv loss is not clearly described in the paper?

xljhtq opened this issue 6 years ago · 4 comments

Hi, I want to know how the adv loss differs from the domain loss. In other words, the adv loss in the paper "Adversarial Multi-task Learning for Text Classification" is not described clearly, so I want to know what its equation is.

xljhtq · Jun 29 '18

total_loss = task_loss + adv_loss + diff_loss + l2_loss

FrankWork · Jun 30 '18
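For context, here is a minimal runnable sketch (TF 1.x style, as used in this repo) of how these four terms could fit together. The tensor shapes, the dense layers, and the 0.05 / 0.01 / 1e-4 weights are illustrative assumptions loosely following the λ and γ hyper-parameters in "Adversarial Multi-task Learning for Text Classification", not this repo's exact code:

```python
import tensorflow as tf

batch, feat, n_classes, n_tasks = 32, 100, 2, 14  # assumed sizes

# Hypothetical shared/private feature tensors and labels, standing in
# for the outputs of the shared and task-specific encoders.
shared = tf.random_normal([batch, feat])
private = tf.random_normal([batch, feat])
task_labels = tf.zeros([batch], dtype=tf.int32)
domain_labels = tf.zeros([batch], dtype=tf.int32)

task_logits = tf.layers.dense(tf.concat([shared, private], 1), n_classes)
# The domain discriminator would normally see flip_gradient(shared);
# see the gradient-reversal sketch further down the thread.
domain_logits = tf.layers.dense(shared, n_tasks)

# Per-task classification loss.
task_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=task_labels, logits=task_logits))
# Adversarial loss: cross-entropy of the domain discriminator.
adv_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=domain_labels, logits=domain_logits))
# Orthogonality constraint: squared Frobenius norm of shared^T * private,
# pushing shared and private feature spaces apart.
diff_loss = tf.reduce_sum(tf.square(
    tf.matmul(shared, private, transpose_a=True)))
# Standard weight decay over all trainable variables.
l2_loss = tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()])

total_loss = task_loss + 0.05 * adv_loss + 0.01 * diff_loss + 1e-4 * l2_loss
```

Note that adv_loss here is an ordinary cross-entropy for a domain discriminator; the adversarial effect comes from how its gradient flows into the shared encoder, which is exactly the point raised next.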

@FrankWork In your code, total_loss = task_loss + adv_loss + diff_loss + l2_loss, so minimizing total_loss also decreases adv_loss. But in reality adv_loss should be increased (with respect to the shared feature extractor) in order to obtain task-invariant shared features. So should I maximize adv_loss or minimize it?

xljhtq · Jul 02 '18

There is a function flip_gradient that reverses the gradient of adv_loss before it reaches the shared encoder, so a single minimization of total_loss effectively maximizes adv_loss with respect to the shared features.

FrankWork · Jul 02 '18
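For reference, gradient reversal in TF 1.x is commonly implemented with a gradient_override_map on an Identity op (the pattern popularized by the DANN codebase); this repo's flip_gradient may differ in detail, so treat this as a sketch:

```python
import tensorflow as tf

class FlipGradientBuilder(object):
    """Identity in the forward pass; multiplies the gradient by -l
    in the backward pass (a gradient reversal layer)."""

    def __init__(self):
        self.num_calls = 0  # each call needs a uniquely named gradient

    def __call__(self, x, l=1.0):
        grad_name = "FlipGradient%d" % self.num_calls
        self.num_calls += 1

        @tf.RegisterGradient(grad_name)
        def _flip_gradients(op, grad):
            # Reverse (and optionally scale) the incoming gradient.
            return [tf.negative(grad) * l]

        g = tf.get_default_graph()
        with g.gradient_override_map({"Identity": grad_name}):
            return tf.identity(x)

flip_gradient = FlipGradientBuilder()
```

The layer is the identity in the forward pass, so adv_loss is computed as a normal cross-entropy; in the backward pass the gradient entering the shared encoder is negated, so one minimization of total_loss trains the discriminator to minimize adv_loss while training the shared encoder to maximize it.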

Hi, do you know the equivalent of flip_gradient in PyTorch?

ammaarahmad1999 · May 27 '21
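A PyTorch equivalent can be written with a custom torch.autograd.Function; the names GradReverse and flip_gradient below are illustrative, not from any particular library:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the
    gradient in the backward pass, like flip_gradient in TF."""

    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # One gradient per forward input: reversed for x, None for lambd.
        return grad_output.neg() * ctx.lambd, None

def flip_gradient(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage: pass shared features through flip_gradient before the domain
# discriminator, then minimize the ordinary cross-entropy as usual.
shared = torch.randn(8, 100, requires_grad=True)
flip_gradient(shared, 0.05).sum().backward()
print(shared.grad[0, 0])  # -0.05: the gradient was reversed and scaled
```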