deep-person-reid
Feature distances not discriminatory
Hi,
I am using torchreid as a feature extractor for my custom dataset. When I compute Euclidean or cosine distances between probe and gallery images, the distances are not well separated between matching and non-matching candidates. For example, if the probe image is a person in a red dress, the correct match has a distance of 0.1 whereas someone in a yellow dress has a distance of 0.2. That does not seem like a big enough separation considering how different the subjects look.
Is this expected behavior for out-of-distribution data? Do I have no other choice apart from retraining the model on my custom dataset?
I am using the OSNet model for feature extraction.
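For reference, a minimal sketch of this setup, assuming torchreid's FeatureExtractor API with placeholder image paths and an empty model_path (i.e. the pretrained weights); features are L2-normalized so cosine distance is just 1 minus the dot product:

```python
import torch.nn.functional as F
from torchreid.utils import FeatureExtractor

# OSNet feature extractor; model_path can point to a fine-tuned checkpoint
# or be left empty to fall back to the pretrained weights.
extractor = FeatureExtractor(
    model_name='osnet_x1_0',
    model_path='',      # placeholder
    device='cuda'
)

probe_paths = ['probe/person_001.jpg']              # placeholder paths
gallery_paths = ['gallery/red_dress.jpg',
                 'gallery/yellow_dress.jpg']

probe_feats = extractor(probe_paths)      # (1, 512) tensor
gallery_feats = extractor(gallery_paths)  # (2, 512) tensor

# L2-normalize so that cosine distance = 1 - dot product
probe_feats = F.normalize(probe_feats, dim=1)
gallery_feats = F.normalize(gallery_feats, dim=1)

cosine_dist = 1.0 - probe_feats @ gallery_feats.t()  # (1, 2) distances
print(cosine_dist)
```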
I have the same issue; the embeddings are entangled.
Rather than checking case by case, you should probably run an overall accuracy test. I've tried a few different re-ID models: DeepSort, ResNet50, and OSNet. With all of them, changing the similarity threshold (e.g. between 0.1 and 0.05) changed my overall accuracy a lot.
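A small sketch of such a threshold sweep; the distance matrix, probe/gallery ID arrays, and the threshold grid are all assumed inputs (random data is used below purely as a stand-in):

```python
import numpy as np

def threshold_sweep(dist, probe_ids, gallery_ids, thresholds):
    """Verification-style accuracy for each candidate threshold.

    dist        : (P, G) distance matrix between probe and gallery features
    probe_ids   : (P,) ground-truth identity per probe image
    gallery_ids : (G,) ground-truth identity per gallery image
    """
    same_id = probe_ids[:, None] == gallery_ids[None, :]  # (P, G) bool
    results = {}
    for t in thresholds:
        pred_same = dist < t                 # predicted "same person" pairs
        results[t] = (pred_same == same_id).mean()
    return results

# Hypothetical usage with random data standing in for real features
rng = np.random.default_rng(0)
dist = rng.uniform(0.0, 0.4, size=(50, 200))
probe_ids = rng.integers(0, 20, size=50)
gallery_ids = rng.integers(0, 20, size=200)
for t, acc in threshold_sweep(dist, probe_ids, gallery_ids,
                              [0.05, 0.1, 0.15, 0.2]).items():
    print(f"threshold={t:.2f}  pairwise accuracy={acc:.3f}")
```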
Hey guys, did you find any solution? I am facing this problem too.
This did not work out for me. I tried a DualNorm-based implementation instead, which gave very good results for my use case.
@mishravishnu can you please provide a link to the repo you used?
https://github.com/BJTUJia/person_reID_DualNorm
@mishravishnu do you mind sharing your code (only the re-ID part) and what threshold you are using? Thanks.