Questions about training details.
Hello, thanks for your impressive work. I am trying to reproduce the results of source only, AdaptSeg, and the proposed method on the C-Driving benchmark. I checked the appendix (C.2, Training details), but some points are still unclear to me. I'd really appreciate your kind reply.
- Which initial weights did you use for training each method: random initialization, the vgg16_bn provided by torchvision, or something else?
- Did you use a training process on the C-Driving benchmark similar to the one used for OCDA on the classification tasks? Specifically, is the overall process as follows? (1) Train the source net. (2) Compute class centroids from the trained source net. (3) Fine-tune the model, initialized from the source model of (1), with fixed centroids and curriculum learning.
- When you construct the visual memory, did you average all features belonging to the same category at once, or first average the features of the same category within each image?
@seyeon956 Thanks for your interest in our work. Here are my answers:
- We use the vgg16_bn provided by torchvision (see the initialization sketch below).
- Yes, the overall process is as you described.
- We average all the features belonging to the same category at once (see the centroid sketch after this list).
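
For anyone else reproducing this, here is a minimal sketch of what initializing from the torchvision vgg16_bn could look like. The 1x1 head and the number of classes are my own assumptions for illustration, not the authors' exact architecture:

```python
import torch.nn as nn
import torchvision.models as models

# Assumed sketch (not the authors' code): reuse the ImageNet-pretrained
# vgg16_bn convolutional layers from torchvision as the segmentation backbone.
vgg = models.vgg16_bn(pretrained=True)
backbone = vgg.features  # conv + BN + ReLU blocks, fully convolutional

# Hypothetical 1x1 classification head on the 512-channel feature map;
# the real decoder/head in the paper may differ.
num_classes = 19
model = nn.Sequential(backbone, nn.Conv2d(512, num_classes, kernel_size=1))
```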
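And here is one possible reading of "average all features of the same category at once": accumulate per-class feature sums over the whole source set and divide once at the end, rather than averaging per image first. The names `extract_features` and `loader` are placeholders, and the labels are assumed to already be downsampled to the feature-map resolution:

```python
import torch

def compute_class_centroids(extract_features, loader, num_classes, feat_dim, device="cpu"):
    """Sketch of the visual memory: one global mean feature per class."""
    sums = torch.zeros(num_classes, feat_dim, device=device)
    counts = torch.zeros(num_classes, device=device)
    with torch.no_grad():
        for images, labels in loader:
            feats = extract_features(images.to(device))  # (N, C, H, W) feature map
            labels = labels.to(device)                   # (N, H, W), same spatial size as feats
            feats = feats.permute(0, 2, 3, 1).reshape(-1, feat_dim)
            labels = labels.reshape(-1)
            for c in range(num_classes):
                mask = labels == c
                if mask.any():
                    sums[c] += feats[mask].sum(dim=0)
                    counts[c] += mask.sum().float()
    # Divide the accumulated sums once at the end: a single average per class
    # over the entire source set, not a per-image average.
    return sums / counts.clamp(min=1).unsqueeze(1)       # (num_classes, feat_dim)
```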