James Choi
In general, the VGG backbone does not care how big or small the image is. Why? Its layers use strides: the strided pooling layers simply downsample whatever input they are given into smaller feature maps. It does not...
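A minimal sketch of why this works, using the standard convolution/pooling output-size formula (the helper names here are hypothetical, for illustration only): each VGG16 3x3 convolution uses padding 1 and preserves spatial size, so only the five stride-2 pooling layers shrink the input, and any input size produces a proportionally smaller feature map rather than an error.

```python
import math

def conv_out(size, kernel, stride, padding):
    # Standard conv/pool output-size formula: floor((n + 2p - k) / s) + 1
    return math.floor((size + 2 * padding - kernel) / stride) + 1

def vgg_spatial(size):
    # VGG16 has five 2x2 max-pool layers with stride 2; its 3x3 convs
    # (padding 1) keep the spatial size, so only the pools shrink it.
    for _ in range(5):
        size = conv_out(size, kernel=2, stride=2, padding=0)
    return size

print(vgg_spatial(224))  # 7  -> the classic 224 / 2**5 feature map
print(vgg_spatial(320))  # 10 -> a bigger input just gives a bigger map
```

The same arithmetic holds for non-square inputs: each dimension is downsampled independently, which is why a fully convolutional backbone like CRAFT's VGG16 accepts arbitrary image sizes.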
Just remove the broken code in basenet/vgg16_bn.py:

```python
from torchvision import models
[-] from torchvision.models.vgg import model_urls  # <- remove Ln. 7
...
class vgg16_bn(torch.nn.Module):
    def __init__(self, pretrained=True, freeze=True):
        super(vgg16_bn, self).__init__()
[-]     model_urls['vgg16_bn']...  # <- remove this line too
```
This model only infers **character regions**; it is not an OCR model. Try [**TrOCR**](https://huggingface.co/docs/transformers/main/en/model_doc/trocr) on the cropped images from CRAFT.
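A minimal sketch of that pipeline step, assuming the detection has already been reduced to an axis-aligned box `(x_min, y_min, x_max, y_max)` (the box format and function name are assumptions for illustration; real CRAFT output is a polygon you would first convert to such a box): crop each detected region, then feed the crops to a recognizer such as TrOCR.

```python
def crop_region(image, box):
    """Crop one detected text region out of an image stored as rows of pixels.

    box = (x_min, y_min, x_max, y_max) in pixels -- a hypothetical format;
    with a numpy array you would slice image[y0:y1, x0:x1] instead.
    """
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in image[y0:y1]]

# Toy 4x6 "image" where pixel (r, c) has value 10*r + c.
image = [[10 * r + c for c in range(6)] for r in range(4)]
crop = crop_region(image, (1, 1, 4, 3))
print(crop)  # [[11, 12, 13], [21, 22, 23]]
```

Each such crop is what you would hand to TrOCR's processor/model to get the actual text string back out.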