pytorch_DANN

Loss function

Open enaserianhanzaei opened this issue 6 years ago • 4 comments

Hi,

I'm not sure if this is an issue, or that's my misunderstanding. You calculated loss as:

loss = class_loss + params.theta * domain_loss

shouldn't it be:

loss = class_loss - params.theta * domain_loss

according to the formula (9) of the paper?
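(For reference, formula (9) in Ganin et al.'s DANN paper defines, approximately, the saddle-point objective

```latex
E(\theta_f, \theta_y, \theta_d)
  = \frac{1}{n}\sum_{i=1}^{n} \mathcal{L}_y^i(\theta_f, \theta_y)
  - \lambda \left( \frac{1}{n}\sum_{i=1}^{n} \mathcal{L}_d^i(\theta_f, \theta_d)
  + \frac{1}{n'}\sum_{i=n+1}^{N} \mathcal{L}_d^i(\theta_f, \theta_d) \right)
```

which is minimized over \(\theta_f, \theta_y\) and maximized over \(\theta_d\), hence the minus sign in the question.)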

enaserianhanzaei avatar Sep 02 '19 14:09 enaserianhanzaei

Hi. I'm trying to understand the implementation and have the same question as you. Would you mind letting me know if you have found the answer? Much appreciated if I could hear from you.

vincent341 avatar Aug 15 '21 07:08 vincent341

Maybe you can refer to models.py: there is a GradReverse layer (GRL) that reverses the gradients for the layers before it.
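A minimal sketch of such a gradient-reversal layer, in the spirit of the GradReverse layer in models.py (illustrative code, not copied from the repo; the `lambd` scale argument is an assumed name):

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)  # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # flip and scale the gradient flowing back to earlier layers
        return grad_output.neg() * ctx.lambd, None

x = torch.tensor([1.0, 2.0], requires_grad=True)
out = GradReverse.apply(x, 1.0)
out.sum().backward()
print(x.grad)  # tensor([-1., -1.]): the gradient of sum() is reversed
```

Because the features pass through this layer before the domain classifier, `loss = class_loss + params.theta * domain_loss` still descends on the domain classifier's parameters while ascending on the extractor's.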

CuthbertCai avatar Aug 15 '21 08:08 CuthbertCai

Thanks so much for the important hint. I understand now. Is removing the grad_reverse layer and updating the loss function to loss = class_loss - params.theta * domain_loss equivalent to the current implementation?

vincent341 avatar Aug 15 '21 08:08 vincent341

I think the two implementations are not equivalent, because the loss function loss = class_loss - params.theta * domain_loss would reverse the gradients of both the extractor and the domain_classifier, while the GRL only reverses the gradients of the extractor.
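This difference can be checked on a toy scalar model (illustrative code, not the repo's; feature f = w_f * x, domain score d = w_d * f, theta = 1, class loss omitted). With a GRL, only the extractor weight sees a flipped gradient; with a negated loss, the domain head's gradient flips as well, so it would be trained to get *worse* at domain discrimination:

```python
import torch

def domain_grads(scheme):
    # toy network: feature f = w_f * x, domain score d = w_d * f
    w_f = torch.tensor(2.0, requires_grad=True)
    w_d = torch.tensor(3.0, requires_grad=True)
    x = torch.tensor(1.0)
    f = w_f * x
    if scheme == "grl":
        # inline gradient reversal: same forward value, negated gradient
        f = (2 * f).detach() - f
        loss = w_d * f          # loss = ... + theta * domain_loss
    else:
        loss = -(w_d * f)       # loss = ... - theta * domain_loss
    loss.backward()
    return w_f.grad.item(), w_d.grad.item()

print(domain_grads("grl"))      # (-3.0, 2.0): extractor reversed, head normal
print(domain_grads("negated"))  # (-3.0, -2.0): head gradient flipped too
```

The extractor gradient matches in both schemes, but the domain classifier's gradient has the opposite sign, confirming the two are not equivalent.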

CuthbertCai avatar Aug 16 '21 10:08 CuthbertCai