GroundingDINO
Error: torch not compiled with CUDA
I was following the Roboflow notebook and got the error below while running prediction with the Grounding DINO model in inference.py:
AssertionError: Torch not compiled with CUDA enabled
It looks like Grounding DINO is hardcoded to run inference on CUDA, so it is not able to do inference on the CPU.
import os

from groundingdino.util.inference import load_model, load_image, predict, annotate

CONFIG_PATH = os.path.join(HOME, "GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py")
WEIGHTS_NAME = "groundingdino_swint_ogc.pth"
WEIGHTS_PATH = os.path.join(HOME, "weights", WEIGHTS_NAME)

model = load_model(CONFIG_PATH, WEIGHTS_PATH)

boxes, logits, phrases = predict(
    model=model,
    image=image,                    # image tensor returned by load_image(...)
    caption=TEXT_PROMPT,
    box_threshold=BOX_TRESHOLD,     # variable names as spelled in the notebook
    text_threshold=TEXT_TRESHOLD
)
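The assertion is raised because the default code path moves the model and tensors to CUDA unconditionally. The usual fix is to choose the device at runtime instead; the selection logic reduces to something like the sketch below. It is plain Python here so it runs without torch installed; in real code the cuda_built flag would be torch.cuda.is_available():

```python
def select_device(cuda_built: bool, cuda_requested: bool = True) -> str:
    """Return the torch device string to use.

    cuda_built stands in for torch.cuda.is_available();
    cuda_requested lets the caller force CPU even on a CUDA build.
    """
    return "cuda" if (cuda_built and cuda_requested) else "cpu"

print(select_device(cuda_built=False))                        # CPU-only torch build
print(select_device(cuda_built=True))                         # CUDA build, default
print(select_device(cuda_built=True, cuda_requested=False))   # CUDA build, forced CPU
```

With this pattern, a CPU-only torch build never attempts a .to("cuda") call, which is exactly what triggers the AssertionError above.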
Please suggest a fix.
Hi, did you find any solution to this problem?
@csv610 There's a flag in the Model class that lets you choose your device. Just set it to 'cpu', for example:
grounding_dino_model = Model(model_config_path=GROUNDING_DINO_CONFIG_PATH, model_checkpoint_path=GROUNDING_DINO_CHECKPOINT_PATH, device='cpu')
Hello, there are two methods, "build_sam" and "build_model". One of them, as given on some link, is as follows:

from segment_anything import build_sam

def load_model(model_config_path, model_checkpoint_path, cpu_only=False):
    args = SLConfig.fromfile(model_config_path)
    args.device = "cuda" if not cpu_only else "cpu"
    model = build_model(args).to("cpu")
    checkpoint = torch.load(model_checkpoint_path, map_location="cpu")
    load_res = model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False)
    print(load_res)
    _ = model.eval()
    return model
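For context, the clean_state_dict helper called in that snippet strips the "module." prefix that DataParallel training prepends to parameter names, so the checkpoint keys match the plain model. A hypothetical re-implementation of that idea (the real helper lives in GroundingDINO's utils; this is just an illustration) looks roughly like:

```python
def clean_state_dict_sketch(state_dict: dict) -> dict:
    """Drop a leading 'module.' from each key, leaving other keys untouched.

    Illustrative stand-in for GroundingDINO's clean_state_dict helper.
    """
    prefix = "module."
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

ckpt = {"module.backbone.weight": 1, "head.bias": 2}
print(clean_state_dict_sketch(ckpt))
```

Combined with map_location="cpu" in torch.load, this lets a CUDA-trained checkpoint load cleanly on a CPU-only machine.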
I had specified .to("cpu") on the model, but it did not work.
Thanks
Appalled to see this problem is still not solved yet... Any plan on this? @SlongLiu
The answer is what @bdubbs-clarifai reported a year ago.
In more detail, the Model class accepts a device argument, which defaults to cuda; setting it to cpu works as expected.