xljhtq

3 comments by xljhtq

@FrankWork In your code, "total_loss = task_loss + adv_loss + diff_loss + l2_loss"; minimizing total_loss will also drive adv_loss down. But in reality, we should let...
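The concern above is that summing adv_loss into the total loss makes every parameter, including the shared encoder's, minimize it, whereas adversarial training wants the shared encoder to maximize it. One standard way to get that behavior while still minimizing a single summed loss is a gradient reversal layer (identity in the forward pass, sign-flipped gradient in the backward pass). Below is a minimal numpy sketch of that idea; the names `gradient_reversal_backward` and `lam` are illustrative, not from the repository under discussion.

```python
import numpy as np

def gradient_reversal_forward(x):
    # Forward pass: the GRL is the identity, so adv_loss is computed normally.
    return x

def gradient_reversal_backward(grad, lam=1.0):
    # Backward pass: flip (and scale) the gradient flowing from the
    # discriminator branch into the shared encoder, so the encoder
    # ascends adv_loss while the discriminator still descends it.
    return -lam * grad

# Gradient of adv_loss w.r.t. the shared features, as the
# discriminator branch would compute it (toy values):
g_disc = np.array([0.3, -1.2])

# What the shared encoder actually receives after the GRL:
g_enc = gradient_reversal_backward(gradient_reversal_forward(g_disc))
print(g_enc)  # sign-flipped relative to g_disc
```

With this in place, "total_loss = task_loss + adv_loss + ..." can be minimized as written: the reversal layer, not the loss sign, makes the encoder adversarial to the discriminator.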

@FrankWork When I train the adversarial network after rebuilding it in TF, the domain_loss decreases while loss_adv increases. But due to the discriminator of loss_adv,...

@liyibo Hi, there is another problem I would like to ask about. The paper says that "pre-activation refers to activation being done before weighting instead of after, as is typically done...
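Reading the quoted sentence literally, the contrast is only about operation order: apply the nonlinearity to the input and then the linear weighting (pre-activation), versus the usual weighting followed by the nonlinearity (post-activation). The truncated comment does not show which layer the paper applies this to, so the sketch below is just that ordering contrast on toy values, with tanh standing in for whatever activation the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(4)       # a hidden state (illustrative)
W = rng.standard_normal((4, 4))  # weighting matrix (illustrative)

# Typical post-activation order: weight first, then activate.
post_activation = np.tanh(W @ h)

# Pre-activation order per the quoted wording: activate first, then weight.
pre_activation = W @ np.tanh(h)

# The two orderings generally produce different outputs,
# since tanh does not commute with a linear map.
print(np.allclose(post_activation, pre_activation))
```

Note also that the pre-activation output is unbounded (it is a linear combination of tanh values), while the post-activation output is squashed into (-1, 1), which is one practical consequence of the swapped order.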