MSF
How to run the eval part of the program
There are two eval programs in your project. How can I debug them?
I didn't understand what you mean by debug them, but here is the command to run eval_linear.py:
python eval_linear.py \
-j 16 \
-b 256 \
--arch resnet50 \
--weights <path to the checkpoint> \
--save <path to the directory where experiment output will be saved> \
<path to the imagenet dataset root>
Command to run eval_knn.py:
python eval_knn.py \
-j 16 \
-b 256 \
--arch resnet50 \
--weights <path to the checkpoint> \
--save <path to the directory where experiment output will be saved> \
<path to the imagenet dataset root>
These are example commands. To see the command-line parameters and change them from their default values, you can run the above commands with the --help flag.
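As a side note, here is a minimal sketch for sanity-checking the dataset root before running the eval scripts, assuming they load the data with torchvision's ImageFolder (one subdirectory per class), which is common for this kind of evaluation code; the paths below are placeholders, not paths from the repository:

import torchvision.datasets as datasets

# Hypothetical paths; replace with your own train/val directories.
train_dir = "/path/to/dataset/train"
val_dir = "/path/to/dataset/val"

# ImageFolder expects: <root>/<class_name>/<image files>
train_set = datasets.ImageFolder(train_dir)
val_set = datasets.ImageFolder(val_dir)

print("train classes:", len(train_set.classes), "train images:", len(train_set))
print("val classes:", len(val_set.classes), "val images:", len(val_set))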
Thank you. I then found a problem when running python eval_knn.py and python eval_linear.py: the eval_linear.py accuracy is very low, but the eval_knn.py result reaches 0.95. Why is this? I am running on my own data (10 classes).
Hi @18456432930,
Sorry, I didn't understand the problem. Can you be more specific? For instance, could you reproduce the numbers in our paper with our models? What augmentation does your model use? Some TensorFlow models do not require input normalization, but our code uses input normalization. Did you mean 0.95% or 95% with eval_knn.py? We use the standard PyTorch model definitions, so can you confirm whether the model definition used for the checkpoint is compatible?
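For context, input normalization here typically means the standard ImageNet per-channel mean/std normalization in the data transform. A minimal sketch of such a validation transform is below; the exact statistics and resize/crop settings used by the repository's scripts may differ:

import torchvision.transforms as transforms

# Standard ImageNet statistics; the repository's eval transform may differ slightly.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

val_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),   # scales pixels to [0, 1]
    normalize,               # subtracts the mean and divides by the std per channel
])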
Thank you, Ajinkya Tejankar.
First, I meant 95%. I only changed the data input (from ImageNet to my own data) and ran the code. I got results of about 30% accuracy with eval_linear.py and about 95% accuracy with eval_knn.py. No changes were made elsewhere in your code. I'm very sorry about my English.
I see. Our linear evaluation code uses a trick of normalizing the features by subtracting the mean and dividing by the std computed over the entire dataset. The mean and std are calculated on the l2-normalized features. This could be a problem for your model, though I am not sure. Perhaps you can try the linear evaluation code from the official MoCo repository.
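To make the trick concrete, here is a minimal sketch of that kind of feature normalization (not the repository's exact code): the features are l2-normalized first, and the mean/std are then computed on those normalized features and applied to both splits before training the linear classifier.

import torch

def normalize_features(train_feats, test_feats, eps=1e-6):
    # L2-normalize each feature vector.
    train_feats = torch.nn.functional.normalize(train_feats, dim=1)
    test_feats = torch.nn.functional.normalize(test_feats, dim=1)

    # Mean/std are computed on the l2-normalized training features
    # and applied to both splits.
    mean = train_feats.mean(dim=0, keepdim=True)
    std = train_feats.std(dim=0, keepdim=True)
    train_feats = (train_feats - mean) / (std + eps)
    test_feats = (test_feats - mean) / (std + eps)
    return train_feats, test_feats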
Don't worry about the English :)