understand_videobased_reid
Need assistance in understanding the methodology for training on video ReID datasets.
Hi, I am trying to understand the different methodologies and architectures used to train on video ReID datasets. Could you please tell me what the best hyperparameter combination is, i.e. learning rate, triplet loss margin, training batch size, test batch size, sequence length, etc., for training on a video ReID dataset?
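For context, this is the kind of hyperparameter setup I have in mind (a rough sketch only; the argument names and default values below are my own guesses, not taken from any specific repository):

```python
# Illustrative hyperparameters commonly exposed by video ReID training scripts.
# All names and defaults here are assumptions for discussion, not actual repo values.
import argparse

parser = argparse.ArgumentParser(description='Video ReID training (illustrative defaults)')
parser.add_argument('--lr', type=float, default=3e-4, help='initial learning rate')
parser.add_argument('--margin', type=float, default=0.3, help='margin for the triplet loss')
parser.add_argument('--train_batch', type=int, default=32,
                    help='training batch size, e.g. 8 identities x 4 tracklets each')
parser.add_argument('--test_batch', type=int, default=32, help='test batch size')
parser.add_argument('--seq_len', type=int, default=4, help='frames sampled per tracklet')
parser.add_argument('--max_epoch', type=int, default=240, help='number of training epochs')

args = parser.parse_args()
print(args)
```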
Also, in one of the questions you asked in the TCLNet repository, you mentioned that when training on the MARS dataset your results did not match those reported in the paper, and the author suggested using all frames during testing instead of 4 (the default value). Could you please explain what this actually means? Which value should replace 4 in the test_frames argument of the argument parser?
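My current guess of what "use all frames while testing" means is something like the sketch below: instead of sampling only test_frames frames per tracklet, split the whole tracklet into chunks, extract one feature per chunk, and average them. The model call signature and the chunking scheme are my assumptions, not the actual TCLNet code:

```python
# A minimal sketch of all-frames testing for video ReID (my assumption of the intent,
# not the actual TCLNet implementation).
import torch

@torch.no_grad()
def extract_tracklet_feature(model, frames, seq_len=4):
    """frames: tensor of shape (T, C, H, W) holding ALL frames of one tracklet."""
    model.eval()
    T = frames.size(0)
    # Pad with copies of the last frame so T is divisible by seq_len.
    if T % seq_len != 0:
        pad = seq_len - T % seq_len
        frames = torch.cat([frames, frames[-1:].repeat(pad, 1, 1, 1)], dim=0)
    # Reshape into (num_chunks, seq_len, C, H, W) and extract one feature per chunk.
    chunks = frames.view(-1, seq_len, *frames.shape[1:])
    feats = model(chunks)  # assumed to return a (num_chunks, feat_dim) tensor
    # Average the chunk features into a single feature for the whole tracklet.
    return feats.mean(dim=0)
```

If this is roughly what the author meant, then test_frames would presumably be set to cover the full tracklet length rather than 4, but I am not sure which exact value the code expects, which is why I am asking.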
I have also attached a screenshot for reference.