37 comments by Vladimir Somers

Hi @ykding666, do you still need help with this issue? I was very busy last month, sorry for the late reply.

@bismex any update on this? Thanks! :)

Hi @Ellenisawake, if you enable evaluation with standard MOT metrics, the tracking results should be exported as .txt files somewhere in the output directory. Can you have a look...

@Ellenisawake, do you mean that the frame indexing starts at 1 and not 0? To be verified, but if I remember correctly, this is how MOTChallenge files are formatted,...
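
For reference, here is a minimal sketch of what a MOTChallenge-style result file looks like and how to check its frame indexing; the output path is a hypothetical placeholder, not the actual file produced by your run:

```python
import csv
from pathlib import Path

# Hypothetical path: adjust to wherever your run exports its MOT results.
results_file = Path("outputs/my_run/MOT17-02.txt")

# MOTChallenge result rows: frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z
with results_file.open() as f:
    rows = [list(map(float, row)) for row in csv.reader(f) if row]

frames = [int(r[0]) for r in rows]
print("first frame index in file:", min(frames))  # MOTChallenge files start at 1, not 0

# If your internal pipeline uses 0-based frame indices, shift by +1 when writing
# the file and by -1 when reading it back.
```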

This is probably [this line](https://github.com/TrackingLaboratory/tracklab/blob/095306aa4bd89c94ead5c579b15c41870d1356f2/tracklab/datastruct/tracking_dataset.py#L218C32-L218C39) causing the issue. When you created your custom dataset, did you make the image_ids and frame count start at 0? Did you subclass the [MOT class](https://github.com/TrackingLaboratory/tracklab/blob/main/tracklab/wrappers/dataset/mot_like/common.py)?...
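
As a rough illustration of the 0-based re-indexing, here is a sketch using pandas; the dataframe layout and column names ("id", "video_id", "frame") are assumptions for the example, not the exact schema used by tracklab:

```python
import pandas as pd

# Hypothetical image metadata for a custom dataset: frames read from
# MOTChallenge-style annotations are 1-based per video.
image_metadatas = pd.DataFrame({
    "id": [101, 102, 103, 201, 202],
    "video_id": [1, 1, 1, 2, 2],
    "frame": [1, 2, 3, 1, 2],
})

# Shift frame numbers so that each video starts at frame 0 before building
# the dataset, which avoids off-by-one indexing errors downstream.
image_metadatas["frame"] = (
    image_metadatas.groupby("video_id")["frame"].transform(lambda s: s - s.min())
)
print(image_metadatas)
```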

Hi @chichengfengxue, there is no explicit support for that currently; even if some modules might support it (e.g. YOLO), the entire Tracklab pipeline cannot be run in a distributed way....

Hi, can you be a little more precise please: what is your exact question, and where did you copy-paste this sentence from? Thanks

These are the weights for the entire BPBreID model, trained on the dataset indicated in the filename. Human-parsing labels are necessary during training to train the attention mechanism....
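
If it helps, a quick way to see that a checkpoint covers the whole model (backbone plus the part-attention branches) is to load it with plain PyTorch and list its keys; the filename below is a hypothetical placeholder:

```python
import torch

# Hypothetical filename: the released checkpoints encode the training dataset in their name.
checkpoint_path = "bpbreid_market1501.pth"

# Load on CPU just to inspect the contents.
ckpt = torch.load(checkpoint_path, map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints wrap the weights in a dict

# Print the first few parameter names and shapes to see which submodules are included.
for name, tensor in list(state_dict.items())[:10]:
    print(name, tuple(tensor.shape))
```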

Hi @Pritigrg, the Occluded-ReID model is trained on Market, so can you try using the Market model? If it does not give the same performance, then I likely tweaked the...

Hi @vipingautam1906. The final feature map (i.e. the output HxWxD feature map from the backbone) is located [here](https://github.com/VlSomers/bpbreid/blob/82c6f2a6f1f4be34d8612e2372cafde9b03bcaef/torchreid/models/bpbreid.py#L129). Do you want to extract this feature map on your custom images,...
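
As a generic sketch of how you could grab such an intermediate feature map on your own images, here is a forward-hook example; a torchvision ResNet stands in for the BPBreID backbone, so the layer name ("layer4") and input resolution are assumptions you would adapt to bpbreid.py:

```python
import torch
import torchvision

# Stand-in backbone; replace with the actual BPBreID backbone and target layer.
model = torchvision.models.resnet50(weights=None).eval()
captured = {}

def save_feature_map(module, inputs, output):
    # output has shape (N, D, H, W); permute to (N, H, W, D) if preferred.
    captured["feature_map"] = output.detach()

hook = model.layer4.register_forward_hook(save_feature_map)

with torch.no_grad():
    model(torch.randn(1, 3, 256, 128))  # typical person-ReID input resolution

print(captured["feature_map"].shape)  # e.g. torch.Size([1, 2048, 8, 4])
hook.remove()
```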