unhandled: could not import node
# Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
# Also available under a BSD-style license. See LICENSE.
import sys
from PIL import Image
import requests
import torch
import torchvision
# import torchvision.models as models
import torchvision.models.detection as models
from torchvision import transforms
import torch_mlir
from torch_mlir_e2e_test.linalg_on_tensors_backends import refbackend
from torchvision.models.detection.anchor_utils import AnchorGenerator
def load_and_preprocess_image(url: str):
    headers = {
        'User-Agent':
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'
    }
    url = "./YellowLabradorLooking_new.jpeg"
    #img = Image.open(requests.get(url, headers=headers,
    #                              stream=True, verify=False).raw).convert("RGB")
    img = Image.open(url)
    # preprocessing pipeline
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    img_preprocessed = preprocess(img)
    return torch.unsqueeze(img_preprocessed, 0)


def load_labels():
    #classes_text = requests.get(
    #    "https://raw.githubusercontent.com/cathyzhyi/ml-data/main/imagenet-classes.txt",
    #    stream=True,
    #).text
    with open("imagenet-classes.txt") as f:
        classes_text = f.read()
    labels = [line.strip() for line in classes_text.splitlines()]
    return labels


def top3_possibilities(res):
    _, indexes = torch.sort(res, descending=True)
    percentage = torch.nn.functional.softmax(res, dim=1)[0] * 100
    top3 = [(labels[idx], percentage[idx].item()) for idx in indexes[0][:3]]
    return top3


def predictions(torch_func, jit_func, img, labels):
    golden_prediction = top3_possibilities(torch_func(img))
    print("PyTorch prediction")
    print(golden_prediction)
    prediction = top3_possibilities(torch.from_numpy(jit_func(img.numpy())))
    print("torch-mlir prediction")
    print(prediction)


image_url = "https://upload.wikimedia.org/wikipedia/commons/2/26/YellowLabradorLooking_new.jpg"
print("load image from " + image_url, file=sys.stderr)
img = load_and_preprocess_image(image_url)
labels = load_labels()

backbone = torchvision.models.mobilenet_v2(pretrained=True).features
backbone.out_channels = 1280
anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0'],
                                                output_size=7,
                                                sampling_ratio=2)
mask_roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0'],
                                                     output_size=14,
                                                     sampling_ratio=2)
maskrcnn = models.MaskRCNN(backbone, num_classes=2)
#maskrcnn = models.MaskRCNN(backbone,
#                           num_classes=2,
#                           rpn_anchor_generator=anchor_generator,
#                           box_roi_pool=roi_pooler,
#                           mask_roi_pool=mask_roi_pooler)
maskrcnn.train(False)

x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
module = torch_mlir.compile(maskrcnn, x, output_type=torch_mlir.OutputType.LINALG_ON_TENSORS)
backend = refbackend.RefBackendLinalgOnTensorsBackend()
compiled = backend.compile(module)
jit_module = backend.load(compiled)
predictions(maskrcnn.forward, jit_module.forward, img, labels)
The output is:
load image from https://upload.wikimedia.org/wikipedia/commons/2/26/YellowLabradorLooking_new.jpg
/home/mlir_venv/lib/python3.8/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
warnings.warn(
/home/mlir_venv/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=MobileNet_V2_Weights.IMAGENET1K_V1`. You can also use `weights=MobileNet_V2_Weights.DEFAULT` to get the most up-to-date weights.
warnings.warn(msg)
error: unhandled: could not import node: %24 : bool = prim::isinstance[types=[Tensor]](%boxes.1)
Traceback (most recent call last):
File "examples/torchscript_maskrcnn.py", line 96, in <module>
module = torch_mlir.compile(maskrcnn, x, output_type=torch_mlir.OutputType.LINALG_ON_TENSORS)
File "/mnt/mlir-npcomp/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/__init__.py", line 144, in compile
mb.import_module(scripted._c, class_annotator)
RuntimeError: see diagnostics
Can someone tell me how to solve this error? Thank you very much.
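For anyone trying to narrow this down: the rejected node comes from scripting, since torch_mlir.compile calls torch.jit.script under the hood (see the mb.import_module(scripted._c, ...) call in the traceback). Below is a minimal sketch that scripts a MaskRCNN and tallies node kinds in the resulting graph, to confirm where prim::isinstance enters. It uses torchvision's stock maskrcnn_resnet50_fpn purely for brevity, and assumes torchvision >= 0.13 for the weights/weights_backbone arguments.

import torch
import torchvision.models.detection as models
from collections import Counter

# Build a MaskRCNN with random weights (no downloads) and script it the
# same way torch_mlir.compile does internally.
maskrcnn = models.maskrcnn_resnet50_fpn(weights=None, weights_backbone=None,
                                        num_classes=2)
maskrcnn.eval()
scripted = torch.jit.script(maskrcnn)

def count_kinds(graph, counter):
    # Recursively tally node kinds, descending into sub-blocks
    # (if/loop bodies), where prim::isinstance tends to appear.
    for node in graph.nodes():
        counter[node.kind()] += 1
        for block in node.blocks():
            count_kinds(block, counter)

counter = Counter()
count_kinds(scripted.inlined_graph, counter)
print("prim::isinstance nodes:", counter["prim::isinstance"])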
Can you please try to rm -rf libtorch/ and build/ and rebuild?
That didn't help. Why do you suggest this? @powderluv
@LucQueen there is an outstanding issue where your libtorch/ could be out of date, so I wanted to make sure you were not hitting that. I have a fix up for review, so once it lands this shouldn't matter.
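For anyone wanting to check for that libtorch/PyTorch mismatch, here is a quick sketch. The libtorch/build-version path is an assumption based on the usual libtorch archive layout; adjust it to wherever your torch-mlir checkout unpacks libtorch.

import pathlib
import torch

# pip-installed PyTorch: version plus the exact source revision it was built from.
print("torch:", torch.__version__, torch.version.git_version)

# Vendored libtorch: standard libtorch archives ship a top-level build-version
# file (assumption: this is run from the torch-mlir checkout root).
build_version = pathlib.Path("libtorch/build-version")
if build_version.exists():
    print("libtorch:", build_version.read_text().strip())
else:
    print("libtorch/build-version not found; check where your build unpacked libtorch")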
I'm using the latest torch-mlir code, and I still hit this problem. I'm glad to hear you have a fix up for review. Can you share it, and when will it land? @powderluv
I'm hitting a similar error when I run ./tools/torchscript_e2e_test.sh:
Exception:
PyTorch TorchScript module -> torch-mlir Object Graph IR import failed with:
Exception:
see diagnostics
Diagnostics:
error: unhandled: could not import node: %6 : bool = prim::Constant[value=0]()
@powderluv Is this related? Can you point us to the solution you have for this?
I already did a "rm -rf libtorch/ and build/ and rebuild" and that didn't help.
@navahgar @LucQueen -- do you have steps to reproduce? Our CI is green at head, so we will need a bit more information to reproduce this.
This seems like a weird issue -- @navahgar, that prim::Constant should be handled here, so I think it is some weird build/config issue: https://github.com/llvm/torch-mlir/blob/874fdb7e429175b701602e08df027f756bdf6ba9/python/torch_mlir/dialects/torch/importer/jit_ir/csrc/node_importer.cpp#L172
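One way to separate a mis-built importer from a genuinely unsupported model is to compile a trivial module whose scripted graph contains little beyond prim::Constant. A sketch, using the same torch_mlir.compile call as the script above; if even this fails with the same "could not import node" error, a stale build (e.g. an out-of-date libtorch) is the likely culprit rather than the model.

import torch
import torch_mlir

class Tiny(torch.nn.Module):
    def forward(self, x):
        # The scalar 1.0 is materialized as a prim::Constant when scripted.
        return x + 1.0

# Same compile path as the MaskRCNN script above, on a module that every
# working build should handle.
module = torch_mlir.compile(Tiny(), torch.ones(2, 3),
                            output_type=torch_mlir.OutputType.LINALG_ON_TENSORS)
print(module)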
@silvasean The code above reproduces it.
I had a similar issue to this, and what solved it was doing rm -rf libtorch* and rm -rf build (note the asterisk after libtorch, since there is also a .zip file that needs to be removed). Then running python -m pip install -r requirements.txt --upgrade to get the latest pytorch. Then rebuilding torch-mlir.
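For anyone who prefers that cleanup scripted, here is a rough Python equivalent. It assumes it is run from the torch-mlir checkout root; rebuild torch-mlir as usual afterwards.

import glob
import shutil
import subprocess
import sys
from pathlib import Path

# Remove the vendored libtorch (the directory *and* the downloaded .zip,
# hence the glob) plus the stale build tree.
for path in glob.glob("libtorch*") + ["build"]:
    p = Path(path)
    if p.is_dir():
        shutil.rmtree(p)
    elif p.exists():
        p.unlink()

# Pull the latest PyTorch pinned by the repo's requirements file.
subprocess.check_call([sys.executable, "-m", "pip", "install",
                       "-r", "requirements.txt", "--upgrade"])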
That actually fixed it for me. It looks like the issue was that I didn't have the latest PyTorch.