FairMOT
About training on another backbone
@ifzhang I read your paper and decided to train the model on my own backbone. According to the paper, you train for only 10 epochs and still get fairly good results on the MOT15 dataset, and when you compare different backbones you get about 40 MOTA on the MOT15 training split. So I guess a good pretrained model is important. Can you share more details about training without a pretrained model?
I get 70 MOTA with my own backbone, without using a pretrained model.
@tangsipeng Hi. When I train my other networks without the pretrained model, I use a batch size of 12. How should the learning rate or other parameters change? At the moment I am using the code's default configuration, and the overall loss stays high: loss 11.0362 | hm_loss 0.6886 | wh_loss 1.4095 | off_loss 0.2094 | id_loss 6.4266
What is your backbone? Can you share a few details about what you did, please?
I use a darknet backbone and do not use deformable convolutions. I just upsample the three outputs to the same size, then concatenate the three outputs into one, as required.
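The neck described above (no deformable conv; just upsample the three darknet outputs to a common resolution and concatenate them) can be sketched roughly as follows. This is a minimal NumPy illustration, not the poster's actual code: the channel counts and strides (8/16/32, typical of darknet-style backbones) are assumptions for the example.

```python
import numpy as np

def upsample_nearest(x, scale):
    """Nearest-neighbor upsampling of an NCHW feature map by an integer scale."""
    return x.repeat(scale, axis=2).repeat(scale, axis=3)

# Hypothetical multi-scale darknet outputs (batch, channels, H, W),
# e.g. strides 8/16/32 on a 608x608 input.
p3 = np.random.rand(1, 128, 76, 76)
p4 = np.random.rand(1, 256, 38, 38)
p5 = np.random.rand(1, 512, 19, 19)

# Bring all three maps to the largest resolution, then concatenate on channels;
# the fused map would feed the detection/ID heads.
fused = np.concatenate(
    [p3, upsample_nearest(p4, 2), upsample_nearest(p5, 4)],
    axis=1,
)
print(fused.shape)  # (1, 896, 76, 76)
```

In a real model the upsampling would typically be `torch.nn.Upsample` or transposed convolutions, possibly with 1x1 convs to reduce channels before the concat, but the data flow is the same.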
May I ask how exactly you did it? Is there a way to contact you?
I haven't worked on this for two years; these days I mainly work on some ByteTrack applications.