
load_model checkpoint loading hardcoded to cpu instead of device parameter

Open nitinkumartech opened this issue 1 year ago • 1 comments

In the current load_model method, the checkpoint's map_location is hardcoded to "cpu". Maybe I'm missing something, but it should follow the device parameter.

def load_model(model_config_path: str, model_checkpoint_path: str, device: str = "cuda"):
    args = SLConfig.fromfile(model_config_path)
    args.device = device
    model = build_model(args)
    checkpoint = torch.load(model_checkpoint_path, map_location="cpu")  # hardcoded, ignores `device`
    model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False)
    model.eval()
    return model

We were not able to use the GPU for inference.

We got it working by changing it to the following:

def load_model(model_config_path: str, model_checkpoint_path: str, device: str = "cuda"):
    args = SLConfig.fromfile(model_config_path)
    args.device = device
    model = build_model(args)
    checkpoint = torch.load(model_checkpoint_path, map_location=device)
    model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False)
    model.eval()
    return model
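For context, either mapping can work as long as the module itself ends up on the target device: mapping the checkpoint straight to `device` (as above), or mapping to CPU and then calling `model.to(device)` afterwards. A minimal runnable sketch of this, using a hypothetical simplified loader (`load_model_sketch`) with a stand-in `nn.Linear` in place of GroundingDINO's `build_model`, not the repository's actual code:

```python
import os
import tempfile
import torch
import torch.nn as nn

def load_model_sketch(checkpoint_path: str, device: str = "cpu"):
    """Hypothetical simplified loader illustrating the map_location fix."""
    model = nn.Linear(4, 2)  # stand-in for build_model(args)
    # map_location=device puts the saved tensors straight on the target
    # device; map_location="cpu" would also work *if* followed by .to(device).
    checkpoint = torch.load(checkpoint_path, map_location=device)
    model.load_state_dict(checkpoint["model"])
    model.to(device)  # make sure the module's parameters live on `device`
    model.eval()
    return model

# Demo on CPU so it runs without a GPU:
ckpt_path = os.path.join(tempfile.mkdtemp(), "ckpt.pt")
torch.save({"model": nn.Linear(4, 2).state_dict()}, ckpt_path)
loaded = load_model_sketch(ckpt_path, device="cpu")
print(next(loaded.parameters()).device)
```

With device="cuda", both the loaded tensors and the module would land on the GPU without an intermediate CPU copy of the weights lingering in the module.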

I can raise a PR if this looks right.

nitinkumartech avatar Feb 08 '24 07:02 nitinkumartech

@SlongLiu What do you think about the above?
