PyTorch-Deep-CORAL
Your method of computing accuracy seems incorrect
First of all, thanks for sharing code.
I haven't fully understood how you compute accuracy in evaluate() in train.py. It looks like you're using some kind of moving average over the accuracies of the test batches; I believe that's incorrect. Accuracy should be computed as num_correct / num_total over the whole test set. Here's what I think, in detail:
# evaluating
correct = 0
total = 0
with torch.no_grad():
    for target_data, target_label in loader:
        # Variable is deprecated since PyTorch 0.4; moving tensors to the device is enough
        target_data = target_data.to(device=args.device)
        target_label = target_label.to(device=args.device)
        out = model(target_data)
        predicted = out.argmax(1)
        correct += (predicted == target_label).sum().item()
        total += target_data.size(0)
accuracy = correct / total
I compared this with your way of computing accuracy, and it turns out yours always reports a higher number. Could you please look into it?
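To illustrate why the two ways of computing accuracy can disagree, here is a small self-contained sketch (the batch counts below are made-up numbers, purely for illustration): averaging per-batch accuracies gives every batch equal weight, so a smaller final batch gets over-weighted relative to num_correct / num_total, and a moving average skews things further by weighting recent batches more.

```python
# Hypothetical per-batch results: the last batch is smaller, as is
# typical when the dataset size is not divisible by the batch size.
batch_sizes = [128, 128, 44]    # last batch is smaller
batch_correct = [96, 96, 44]    # per-batch accuracy: 75%, 75%, 100%

# Correct overall accuracy: total correct over total samples.
true_accuracy = sum(batch_correct) / sum(batch_sizes)

# Unweighted mean of per-batch accuracies: over-weights the small last batch.
mean_of_batch_accuracies = sum(
    c / n for c, n in zip(batch_correct, batch_sizes)
) / len(batch_sizes)

print(true_accuracy)             # 236/300 ≈ 0.787
print(mean_of_batch_accuracies)  # (0.75 + 0.75 + 1.0)/3 ≈ 0.833
```

Here the batch-averaged figure comes out higher because the small, easy last batch counts as much as a full batch; in general the two only agree when every batch has the same size.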
I also tried to implement Deep CORAL myself and only reached ~0.55 on A->W. Another GitHub project got no more than that either. That's far worse than the original paper (they report 66.4%). TAT
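In case the gap comes from a loss implementation detail, this is the CORAL loss as I understand it from the original paper (Sun & Saenko, 2016): the squared Frobenius distance between the source and target feature covariance matrices, scaled by 1/(4 d^2). The sketch below is my own, not code from this repo; the function names are mine.

```python
import torch

def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """CORAL loss between two feature batches of shape (n, d).

    Sketch of the loss from Sun & Saenko (2016):
        L = ||C_S - C_T||_F^2 / (4 * d^2)
    where C_S, C_T are the feature covariance matrices.
    """
    d = source.size(1)

    def covariance(x: torch.Tensor) -> torch.Tensor:
        # Unbiased sample covariance of the features (d x d).
        n = x.size(0)
        xc = x - x.mean(dim=0, keepdim=True)
        return xc.t() @ xc / (n - 1)

    diff = covariance(source) - covariance(target)
    # Squared Frobenius norm of the covariance difference, scaled by 1/(4 d^2).
    return (diff * diff).sum() / (4 * d * d)
```

A sanity check is that the loss is exactly zero when source and target are the same batch, and non-negative otherwise; if a reimplementation fails either property, the accuracy gap may be a loss bug rather than a tuning issue.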