FairMOT
About finetuning on the MOT datasets
Hi, I really appreciate this awesome work.
But I have some questions. I read all the issues and saw that you finetune your model after training on the MOT model zoo dataset. Could you share the parameter settings you used when finetuning on MOT15~20, such as the learning rate, learning-rate steps, etc.?
Best regards!
Only MOT15 and MOT20 need to be finetuned. You can find the training file here: https://github.com/ifzhang/FairMOT/blob/master/experiments/ft_mot15_dla34.sh
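For reference, a typical finetuning schedule in PyTorch loads a pretrained checkpoint, starts from a lower learning rate, and decays it by 10x partway through a short run. This is only a generic sketch with placeholder values (the model, epoch count, and milestone here are illustrative, not FairMOT's settings); the authoritative numbers are in the `ft_mot15_dla34.sh` script linked above.

```python
import torch

# Stand-in module; in FairMOT this would be the DLA-34 detector
# loaded from a pretrained checkpoint via --load_model.
model = torch.nn.Linear(4, 2)

# Finetuning usually starts from a lower lr than from-scratch training
# (1e-5 here is a placeholder, not FairMOT's documented value).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

# Decay the lr by 10x at epoch 15 of an illustrative 20-epoch finetune.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[15], gamma=0.1
)

for epoch in range(20):
    # ... one training epoch over the finetuning data would go here ...
    optimizer.step()
    scheduler.step()

print(optimizer.param_groups[0]["lr"])  # lr after the milestone decay
```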
Thanks for the reply! Another question: I saw your update about the new version, which is pre-trained on the CrowdHuman dataset.
But when I run the script "crowdhuman_dla34.sh", it fails with "IndexError: index 500 is out of bounds for axis 0 with size 500".
Should I change '--K' from 500 to a bigger number?
One more request: may I see your log file from training on the CrowdHuman dataset?
Hi @james128333, how did you solve the index error? I have been stuck on this problem for many days. Thank you.
@quacnhat just increase the K parameter in opts, for example set it to 1000
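The reason raising K fixes it: K caps the maximum number of objects per image, and the dataset code pre-allocates target arrays of length K, filling one slot per ground-truth box. CrowdHuman images can contain more than 500 people, so the 501st box writes past the end of a size-500 array. A toy sketch of that failure mode (the function name and arrays here are illustrative, not FairMOT's actual code):

```python
import numpy as np

def build_targets(num_objects, K):
    """Toy version of the target-building step: pre-allocate an array
    of length K (max objects per image) and fill one slot per box."""
    ids = np.zeros(K, dtype=np.int64)
    for i in range(num_objects):
        ids[i] = i + 1  # raises IndexError once i reaches K
    return ids

build_targets(400, K=500)  # fine: fewer boxes than K

try:
    build_targets(600, K=500)  # a crowded image with 600 annotations
except IndexError as e:
    print(e)  # "index 500 is out of bounds for axis 0 with size 500"

build_targets(600, K=1000)  # the fix: raise --K so all boxes fit
```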
Hello, I want to ask a question about finetuning. Are both the backbone and the heads trained (updated via backpropagation) during finetuning, or only the heads? Is it flexible to adjust this according to our needs?
Thank you.
Finetuning is flexible, and it depends on which parts you want to train. Just freeze the parts of the network you do not want to update.
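In PyTorch, freezing a subnetwork means setting `requires_grad = False` on its parameters and passing only the trainable ones to the optimizer. A minimal sketch with a hypothetical two-part model standing in for FairMOT's backbone and heads:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for FairMOT's DLA-34 backbone + detection/ID heads.
model = nn.ModuleDict({
    "backbone": nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()),
    "heads": nn.ModuleDict({
        "hm": nn.Conv2d(8, 1, 1),   # heatmap head
        "id": nn.Conv2d(8, 64, 1),  # re-ID embedding head
    }),
})

# Freeze the backbone: no gradients are computed for these weights,
# so only the heads are updated during finetuning.
for p in model["backbone"].parameters():
    p.requires_grad = False

# Give the optimizer only the parameters that should still be trained.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
print(trainable, frozen)
```

The same pattern works in reverse (freeze the heads, train the backbone) or for any subset of modules.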