SOLIDER-REID

Is it possible to speed up inference and compare features?

Open MyraBaba opened this issue 1 year ago • 1 comment

Hi

I have an RTX 2080 Ti, and the inference below takes ~0.019 seconds after warmup, i.e. about 40-50 inferences per second. That looks slow. How can I make it faster? Is it due to the model size?

Also

@torch.no_grad()
def get_feature(img, model, device, normalize=False):
    # Preprocess a single image and add a batch dimension.
    input = val_transforms(img).unsqueeze(0)
    input = input.to(device)
    output, _ = model(input)
    if normalize:
        output = F.normalize(output)
    return output
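On the feature-comparison part of the question: a common way to compare two extracted feature vectors is cosine similarity (for L2-normalized features this reduces to a dot product). A minimal plain-Python sketch, assuming the tensors returned by `get_feature` have been flattened to lists (e.g. via `output.squeeze(0).tolist()`):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors given as plain lists.
    # For L2-normalized features this is just the dot product.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Identical vectors give 1.0; orthogonal vectors give 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

In PyTorch the same comparison can be done directly on the tensors with `torch.nn.functional.cosine_similarity`; the sketch above only illustrates the math.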

if device:
    if torch.cuda.device_count() > 1:
        print('Using {} GPUs for inference'.format(torch.cuda.device_count()))
        model = nn.DataParallel(model)
    model.to(device)

model.eval()

elapsed_time = next(timer_gen)
feature1 = get_feature(img1, model, device, normalize=True)
elapsed_time = next(timer_gen) - elapsed_time
print(elapsed_time)
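One caveat with this kind of measurement: timing a single call is noisy, so averaging many iterations after warmup gives a more stable per-call number. A minimal sketch of such a helper (`time_call` is a hypothetical name, not part of SOLIDER-REID; for GPU inference you would also call `torch.cuda.synchronize()` before reading the clock, since CUDA kernels launch asynchronously):

```python
import time

def time_call(fn, warmup=5, iters=50):
    # Average wall-clock seconds per call of fn(), after warmup runs.
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Example with a cheap stand-in workload instead of the model forward pass:
avg = time_call(lambda: sum(range(10_000)))
print(f"{avg:.6f} s/call")
```

Beyond measurement, the usual levers for throughput are batching several crops into one forward pass, half-precision inference, or exporting to an optimized runtime; which of these SOLIDER-REID supports is best confirmed against the repo itself.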

MyraBaba avatar Jul 06 '23 10:07 MyraBaba