SOLIDER-REID

Trained models

Open ForeUP opened this issue 2 years ago • 12 comments

Hello, would it be possible to share some trained models? I'd like to test the results first. Thank you very much!

ForeUP avatar Apr 06 '23 13:04 ForeUP

Hello. If you need the pretrained model, you can download it from SOLIDER and run the SOLIDER-REID training command directly to obtain the finetuned ReID model. If you would prefer to use a trained ReID model directly, we will try to push an update soon and upload the models.
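
For the second case, loading a finetuned ReID checkpoint for evaluation follows the repo's test setup. A rough sketch, with placeholder config/weight paths and num_class (use the ones matching the dataset you trained on):

```python
# Rough sketch: load a finetuned SOLIDER-REID checkpoint for evaluation.
# The config path, checkpoint path and num_class below are placeholders.
from config import cfg
from model import make_model

cfg.merge_from_file("configs/msmt17/swin_small.yml")   # placeholder config
cfg.freeze()

model = make_model(cfg, num_class=1000, camera_num=0,
                   view_num=0, semantic_weight=0.2)
model.load_param("weights/swin_small_msmt17.pth")       # placeholder finetuned weights
model.eval()
```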

cwhgn avatar Apr 07 '23 02:04 cwhgn

Great, thank you very much!

ForeUP avatar Apr 07 '23 06:04 ForeUP

Hello, the links to the trained ReID models have been added to the Readme. Feel free to give them a try.

cwhgn avatar Apr 11 '23 03:04 cwhgn

When I extract features with the trained model, the similarity is very high both for the same person and for different people.

deep-practice avatar Apr 20 '23 04:04 deep-practice

You can use runtest.sh to verify the results.

cwhgn avatar Apr 20 '23 06:04 cwhgn

```python
import torch
import torch.nn as nn
import torchvision.transforms as T
import cv2
from config import cfg
from model import make_model
import torch.nn.functional as F
import numpy as np
from PIL import Image, ImageFile
import os.path as osp


def read_image(img_path):
    """Keep retrying until the image is read successfully."""
    got_img = False
    if not osp.exists(img_path):
        raise IOError("{} does not exist".format(img_path))
    while not got_img:
        try:
            img = Image.open(img_path).convert('RGB')
            got_img = True
        except IOError:
            print("IOError incurred when reading '{}'. Will redo. Don't worry. Just chill.".format(img_path))
            pass
    return img


@torch.no_grad()
def get_feature(img, model, device, normalize=False):
    """Extract a single feature vector for one PIL image."""
    input = val_transforms(img).unsqueeze(0)
    input = input.to(device)
    output, _ = model(input)
    if normalize:
        output = F.normalize(output)
    return output


# Merge the config before building the transforms so that
# cfg.INPUT.SIZE_TEST is the configured test size, not the default.
config_file = "configs/msmt17/swin_small.yml"
cfg.merge_from_file(config_file)
cfg.freeze()

val_transforms = T.Compose([
    T.Resize(cfg.INPUT.SIZE_TEST),
    T.ToTensor(),
    T.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
])

model = make_model(cfg, num_class=1000, camera_num=0, view_num=0, semantic_weight=0.2)
model.load_param("weights/swin_small_msmt17.pth")
device = "cuda"

if device:
    if torch.cuda.device_count() > 1:
        print('Using {} GPUs for inference'.format(torch.cuda.device_count()))
        model = nn.DataParallel(model)
    model.to(device)

model.eval()

img1 = read_image("test_imgs/p5.jpg")
img2 = read_image("test_imgs/p7.jpg")
feature1 = get_feature(img1, model, device, normalize=True)
feature2 = get_feature(img2, model, device, normalize=True)
# The features are already normalized above, so this extra normalize is a no-op.
feature1, feature2 = F.normalize(feature1), F.normalize(feature2)
sim = torch.mm(feature1, feature2.t())
print(sim)
```

@cwhgn This is the code I wrote based on test.py. Could you help me check where the problem is? The similarity between the images is as high as 0.9+.

deep-practice avatar Apr 23 '23 13:04 deep-practice

The model I used is the officially provided swin_small_msmt17.pth.

deep-practice avatar Apr 23 '23 13:04 deep-practice

So far I haven't been able to spot a problem in your code either. A few suggestions: 1) first check whether swin_small_msmt17.pth was loaded successfully; 2) check step by step whether the image features match the output of the provided test.py; 3) if they match test.py, is it possible that the similarity between the two images is simply that high?
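
For 2), a quick way to compare is to dump the feature from test.py to disk and diff it against the one from your script. A minimal sketch (the filename below is just a placeholder; save it beforehand with torch.save inside test.py):

```python
# Sketch for check 2): compare the feature from the script above against one
# saved from test.py for the same image. "feat_from_testpy.pt" is a placeholder
# file produced beforehand with torch.save(feat, "feat_from_testpy.pt").
import torch

feat_script = get_feature(img1, model, device, normalize=False).cpu()
feat_testpy = torch.load("feat_from_testpy.pt").cpu()

print(torch.allclose(feat_script, feat_testpy, atol=1e-5))
print((feat_script - feat_testpy).abs().max().item())
```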

cwhgn avatar Apr 24 '23 06:04 cwhgn

1) The model loads successfully. 2) I compared the features extracted by test.py with the features extracted in my demo, and the values are identical. 3) With Euclidean distance, the same image and different images can be told apart, but when switching to cosine similarity, the similarity between any pair of images is 0.9+.

deep-practice avatar Apr 25 '23 02:04 deep-practice

Regarding 3), my understanding is that this is because Euclidean distance is used during training, see https://github.com/tinyvision/SOLIDER-REID/blob/c35c07671ceb16f72bb7c8fd94929ef436b9406b/loss/triplet_loss.py#L124

If you want the features to be separable under cosine distance, you can try training with cosine distance instead.
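
As a reference, comparing the two crops with the same Euclidean metric used by the triplet loss would look roughly like the sketch below; it reuses get_feature/img1/img2/model from the script above and extracts un-normalized features:

```python
# Sketch: compare the two crops with the Euclidean metric used in training,
# alongside cosine similarity, using the helpers from the script above.
import torch
import torch.nn.functional as F

f1 = get_feature(img1, model, device, normalize=False)
f2 = get_feature(img2, model, device, normalize=False)

euclidean_dist = torch.cdist(f1, f2)         # smaller = more similar
cosine_sim = F.cosine_similarity(f1, f2)     # larger = more similar
print(euclidean_dist.item(), cosine_sim.item())
```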

cwhgn avatar Apr 25 '23 03:04 cwhgn

Understood, thanks.

deep-practice avatar Apr 25 '23 05:04 deep-practice

@deep-practice

A silly newbie question.

The p5 and p7 images are cropped person images. What should the dimensions of the images be?

Is it 384 x 192?

Is there any newbie-friendly documentation on running inference and comparing the distance between two detected person images?

The detected persons could come from YOLO or from SOLIDER.

Best

MyraBaba avatar Jul 05 '23 16:07 MyraBaba