Universal-Domain-Adaptation
Why does `normalize_weight` need to divide by `torch.mean(x)`?
```python
def normalize_weight(x):
    min_val = x.min()
    max_val = x.max()
    x = (x - min_val) / (max_val - min_val)
    x = x / torch.mean(x)
    return x.detach()
```
According to the paper, `x` should lie in (0, 1) after the min-max scaling, so why does `normalize_weight` additionally divide by `torch.mean(x)`?
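A quick check on a toy tensor shows what the extra division does: after it, the weights average to exactly 1, which keeps the overall scale of a weighted loss stable across batches. Note this is an observation of the code's effect, not a motivation stated in the paper.

```python
import torch

def normalize_weight(x):
    # Min-max scale into [0, 1], as described in the paper.
    min_val = x.min()
    max_val = x.max()
    x = (x - min_val) / (max_val - min_val)
    # Extra step under discussion: rescale so the weights average to 1.
    x = x / torch.mean(x)
    return x.detach()

w = normalize_weight(torch.tensor([0.2, 0.5, 0.9, 1.4]))
# After the division the mean is exactly 1, so using w as per-sample
# loss weights does not change the average magnitude of the loss.
print(w.mean())
```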
```python
ce = nn.CrossEntropyLoss(reduction='none')(predict_prob_source, label_source)
```

And why is `after_softmax(predict_prob_source)` fed to `nn.CrossEntropyLoss`? That criterion already combines `LogSoftmax` and `NLLLoss`, so it expects raw logits.
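To make the concern concrete: `nn.CrossEntropyLoss` applies `LogSoftmax` internally, so passing already-softmaxed probabilities applies softmax twice and yields a different (flattened) loss. A minimal sketch with made-up logits, independent of the repo's code:

```python
import torch
import torch.nn as nn

logits = torch.tensor([[2.0, -1.0, 0.5]])  # hypothetical raw network output
label = torch.tensor([0])

ce = nn.CrossEntropyLoss(reduction='none')

# Intended usage: raw logits go in, softmax is applied internally.
loss_on_logits = ce(logits, label)

# Feeding probabilities instead applies softmax a second time.
loss_on_probs = ce(torch.softmax(logits, dim=1), label)

print(loss_on_logits, loss_on_probs)  # the two losses differ
```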