barcode_detection_benchmark
How to run inference only
What would be the simplest steps to follow to run inference only, i.e. take an input image and get the detected barcode locations out? I'd like to test with my own images before diving into all the parts that are unfamiliar to me (like Catalyst). I have downloaded the checkpoints; now what? Thanks.
Hi, you can just load the segmentation model from the checkpoint and apply it to your images:
model.load_state_dict(torch.load(ckpt_path)['model_state_dict'])
Images should be 3-channel RGB, normalized with the standard ImageNet normalization: mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225).
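The preprocessing described above can be sketched as a small helper; a minimal version assuming a uint8 HxWx3 RGB image as input (the function name is mine, not from the repo):

```python
import numpy as np

# ImageNet normalization constants, as stated in the answer above
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image to a normalized 3xHxW float32 array."""
    x = image_rgb.astype(np.float32) / 255.0  # scale to 0..1
    x = (x - MEAN) / STD                      # per-channel ImageNet normalization
    return x.transpose(2, 0, 1)               # HWC -> CHW layout expected by PyTorch
```

Stack several such arrays along a new leading axis to form the `batch_images` tensor used below.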
heatmap = model(batch_images)[:, 0] # zero channel - barcode/no-barcode logits
heatmap = torch.sigmoid(heatmap) # 0...1 normalized (>.5 is considered barcode)
So you can easily get a segmentation map / heatmap of "barcodeness". If you need actual rectangles, you can then postprocess it with OpenCV's findContours and minAreaRect.
I use this code for inference:
model = models.ZharkovDilatedNet()
model.load_state_dict(torch.load(ckpt_path)['model_state_dict'])
heatmap = model(test_dataset)[:, 0]
heatmap = torch.sigmoid(heatmap)
But the downloaded checkpoint does not match the model. How can I fix it? Thanks.
size mismatch for net.0.0.weight: copying a param with shape torch.Size([24, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([24, 1, 3, 3]).
size mismatch for net.9.weight: copying a param with shape torch.Size([8, 24, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 24, 1, 1]).
size mismatch for net.9.bias: copying a param with shape torch.Size([8]) from checkpoint, the shape in current model is torch.Size([1]).
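The shapes in the error suggest the checkpoint was trained with 3-channel RGB input and an 8-channel output head, while the default-constructed model expects 1 input channel and 1 output channel; the exact `ZharkovDilatedNet` constructor arguments are repo-specific, so the surest way forward is to inspect what the checkpoint expects. A small helper for that, assuming the checkpoint layout shown in the thread:

```python
import torch

def checkpoint_shapes(ckpt_path: str) -> dict:
    """Return {param_name: shape} for the weights stored in a checkpoint file."""
    state = torch.load(ckpt_path, map_location="cpu")["model_state_dict"]
    return {name: tuple(t.shape) for name, t in state.items()}

# Example reading (from the error message above):
#   net.0.0.weight -> (24, 3, 3, 3): the first conv takes 3 input channels (RGB)
#   net.9.weight   -> (8, 24, 1, 1): the head outputs 8 channels, channel 0 being
#                     the barcode/no-barcode logits mentioned earlier in the thread
```

Once you know the required shapes, construct the model with matching input/output channel counts before calling load_state_dict.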
@rootzzp Were you able to run the inference? I am able to load the model successfully, but I am not able to get the contours from the image.