Fine-tuned model
Hi Thomas, Great work!
I have a very fundamental question.
I understand that the idea behind the presentation and the shared code is to spread knowledge, but do you think that, instead of simply using the standard ResNet model to get the features, one should first fine-tune the model on a specific dataset and then extract features?
For instance, if I intend to find features for fashion data, should I first fine-tune the standard ResNet model by unfreezing some of the layers and then extract features for new fashion images? Do you think this would improve search accuracy?
Thanks
Again, Great work!
Thanks @techietrader, fine-tuning your network on a classification task on your own dataset is a good idea. Two things to keep in mind:
- Try to create a gold dataset of manually curated query -> results examples in order to evaluate the quality of the KNN results you get from different models.
- It's not always a given that such fine-tuning will improve the quality of the search; it heavily depends on whether your labels align with the objective of your search retrieval strategy.
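To make the evaluation point concrete, here is a minimal sketch of scoring KNN results against such a gold set with precision@k. The `gold` and `model_a` dictionaries and their ids are hypothetical, standing in for your manually judged queries and a candidate model's retrieved results:

```python
# Hypothetical sketch: evaluate KNN retrieval against a hand-labelled
# "gold" set of query -> relevant-results pairs using precision@k.

def precision_at_k(retrieved, relevant, k=5):
    """Fraction of the top-k retrieved items that are judged relevant."""
    top_k = retrieved[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# gold: query id -> set of item ids judged relevant (assumed format)
gold = {'q1': {'a', 'b', 'c'}, 'q2': {'d'}}

# ranked results returned by one candidate model (assumed ids)
model_a = {'q1': ['a', 'x', 'b', 'y', 'z'],
           'q2': ['d', 'e', 'f', 'g', 'h']}

scores = [precision_at_k(model_a[q], gold[q], k=5) for q in gold]
mean_p5 = sum(scores) / len(scores)
print(mean_p5)  # mean precision@5 across queries
```

Computing the same metric for each fine-tuned variant gives you a single number to compare models with, instead of eyeballing result grids.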
You can have a look at this tutorial on fine-tuning with MXNet here:
- https://gluon-cv.mxnet.io/build/examples_classification/transfer_learning_minc.html
- https://mxnet.incubator.apache.org/versions/master/tutorials/gluon/gluon_from_experiment_to_deployment.html
Thanks, Thomas,
The links were helpful!
One more quick question, more related to MXNet:
Since we intend to extract the features of an image, we are using the following line of code -
net = vision.resnet18_v2(pretrained=True, ctx=ctx).features
But how do I figure out which layer is being used? And suppose I wish to try a different layer, say a fully connected layer or a pooling layer. Is that possible?
Thanks, BR
@techietrader, sorry I missed that question:
Yes, you can easily pick which layer you want. Try:
print(net)
You should see every layer of the network, and you can then slice out the layers you want. Note that since `net` above was already assigned the `.features` block, you slice it directly:
subnet = net[:10]