pytorch-YOLOv4

inference problem

Open · Lsmartworld opened this issue 3 years ago · 7 comments

Sorry, I don't understand how to run inference. I followed the usage shown in the repo: python models.py <num_classes> <weightfile> <imgfile> <IN_IMAGE_H> <IN_IMAGE_W> <namefile(optional)>

(INTERRUPTED.pth is my training checkpoint.)

python models.py 80 "/home/cad429/code/lzh/my_yolo/INTERRUPTED.pth" "/home/cad429/code/lzh/my_yolo/data/dog.jpg" 576 768

But I get this error: RuntimeError: Error(s) in loading state_dict for Yolov4: Missing key(s) in state_dict: "down1.conv1.conv.0.weight", ... (and many more).

How can I solve this? Please help.

Lsmartworld avatar Nov 07 '20 13:11 Lsmartworld

Hey, I found a way to properly load the model (don't ask me why it works):

import torch

from models import Yolov4  # model definition from this repository's models.py

model = Yolov4(yolov4conv137weight=None, n_classes=1, inference=True)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Wrap the model in DataParallel when more than one GPU is available.
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)
model.to(device=device)
# model_path is the path to your .pth checkpoint.
pretrained_dict = torch.load(model_path, map_location=torch.device('cuda'))
model.load_state_dict(pretrained_dict)
model.cuda()

adrien-jacquot avatar Oct 01 '21 15:10 adrien-jacquot
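One likely explanation for why re-wrapping in DataParallel helps (an editorial note, not stated in the thread): a checkpoint saved from a DataParallel-wrapped model stores its keys with a "module." prefix, so a plain Yolov4 instance reports every key as missing. An equivalent fix is to strip that prefix from the checkpoint itself. A minimal sketch, assuming the checkpoint was saved with torch.save(model.state_dict(), ...) and using INTERRUPTED.pth and n_classes=80 as placeholders:

import torch

from models import Yolov4  # model definition from this repository's models.py

model = Yolov4(yolov4conv137weight=None, n_classes=80, inference=True)

# Load the checkpoint on CPU first; keys may carry a "module." prefix
# if the model was trained with torch.nn.DataParallel.
checkpoint = torch.load("INTERRUPTED.pth", map_location="cpu")
cleaned = {k[len("module."):] if k.startswith("module.") else k: v
           for k, v in checkpoint.items()}

model.load_state_dict(cleaned)
model.eval()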

Did you solve this problem? I got the same error.

Hibixis avatar Oct 21 '21 03:10 Hibixis

(Quoting adrien-jacquot's reply and code above.)

I used your method, but got the same error. :(

Hibixis avatar Oct 21 '21 03:10 Hibixis

I read about a similar problem on Stack Overflow; it happens when the loaded checkpoint does not match the model architecture used for inference.

Hibixis avatar Oct 21 '21 03:10 Hibixis
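One way to check for that kind of mismatch (a rough diagnostic sketch, not from the thread; the checkpoint path and class count are placeholders) is to compare the checkpoint keys with the keys the model actually expects before calling load_state_dict:

import torch

from models import Yolov4  # model definition from this repository's models.py

model = Yolov4(yolov4conv137weight=None, n_classes=80, inference=True)
checkpoint = torch.load("INTERRUPTED.pth", map_location="cpu")

model_keys = set(model.state_dict().keys())
ckpt_keys = set(checkpoint.keys())

# Keys the model expects but the checkpoint lacks, and keys the checkpoint
# carries that the model does not know about.
print("missing from checkpoint:", sorted(model_keys - ckpt_keys)[:5])
print("unexpected in checkpoint:", sorted(ckpt_keys - model_keys)[:5])

If every "missing" key reappears under "unexpected" with a "module." prefix, the DataParallel explanation above applies; if the mismatched keys belong to the detection head, the n_classes used for inference probably differs from the one used for training.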

Here's a slightly different version:

import logging

import torch

from models import Yolov4  # model definition from this repository's models.py

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def load_model(model, pretrained_dict, parallelize):
    # Optionally wrap in DataParallel so that checkpoints saved from a
    # parallelized model (keys prefixed with "module.") load correctly.
    if parallelize:
        model = torch.nn.DataParallel(model)
    model.to(device=device)
    model.load_state_dict(pretrained_dict)
    return model


def init(model_path):
    model = Yolov4(yolov4conv137weight=None,
                   n_classes=1, inference=True)
    pretrained_dict = torch.load(model_path, map_location=device)
    # First try loading into a DataParallel-wrapped model, then fall back
    # to the plain model if the state_dict keys do not match.
    try:
        model = load_model(model, pretrained_dict, True)
    except RuntimeError:
        logging.error(
            "Error while loading the model with parallelization. Trying to load without")
        model = load_model(model, pretrained_dict, False)

    if device.type == 'cuda':
        model.cuda()
    model.eval()
    logging.info("Init complete")
    return model

adrien-jacquot avatar Oct 25 '21 07:10 adrien-jacquot
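For completeness, a small smoke test of the init helper above (the checkpoint path and the 576x768 input size are placeholders; the exact output format depends on the repository's inference head):

import logging

import torch

logging.basicConfig(level=logging.INFO)

model = init("INTERRUPTED.pth")  # placeholder checkpoint path

# Forward a single dummy 3-channel image just to confirm the weights loaded
# and the forward pass runs on the chosen device.
dummy = torch.zeros(1, 3, 576, 768, device=device)
with torch.no_grad():
    output = model(dummy)
logging.info("forward pass OK, output type: %s", type(output))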

Hi @adrien-jacquot

Neither of your options works for me. Can you help further?

Best regards, Mariia

mariiak2021 avatar Oct 27 '21 12:10 mariiak2021

Not really, sorry. What is the error you're getting?

adrien-jacquot avatar Oct 27 '21 14:10 adrien-jacquot