Importing an ONNX OPT model runs into an error while lowering to torch MLIR
Import of the ONNX model into torch MLIR using the ONNX importer is successful, but the subsequent lowering to torch MLIR fails with:

opt-125M.fp32.torch-onnx.mlir:244:12: error: failed to legalize operation 'torch.operator' that was explicitly marked illegal
%241 = torch.operator "onnx.LayerNormalization"(%240, %8, %9) {torch.onnx.axis = -1 : si64, torch.onnx.epsilon = 9.99999974E-6 : f32} : (!torch.vtensor<[1,6,768],f32>, !torch.vtensor<[768],f32>, !torch.vtensor<[768],f32>) -> !torch.vtensor<[1,6,768],f32>
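For isolating the failure, here is a minimal sketch (not part of the original report; the LayerNormOnly module and layernorm.onnx file name are made up for illustration) that exports just a torch.nn.LayerNorm. Since ONNX introduced LayerNormalization in opset 17, exporting at that opset should produce the same onnx.LayerNormalization operator in the importer output:

import torch

# Hypothetical minimal reproducer: a bare LayerNorm exported at opset 17
# (where ONNX gained LayerNormalization) should emit the same
# onnx.LayerNormalization op when run through import_onnx and torch-mlir-opt.
class LayerNormOnly(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.ln = torch.nn.LayerNorm(768)

    def forward(self, x):
        return self.ln(x)

torch.onnx.export(
    LayerNormOnly(), torch.randn(1, 6, 768), "layernorm.onnx", opset_version=17
)

Running the same import and lowering steps below on layernorm.onnx should hit the same legalization error if LayerNormalization is the only unsupported op.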
Steps to reproduce:

a) Save the following code as model.py:
from transformers import OPTModel, AutoTokenizer
import torch

class optModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.model = OPTModel.from_pretrained(
            "facebook/opt-125M",
            num_labels=2,
            output_attentions=False,
            output_hidden_states=False,
            torchscript=True,
        )
        self.model.eval()

    def forward(self, tokens):
        # Return only the last hidden state so the export has a single tensor output.
        return self.model.forward(tokens)[0]

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125M")
test_input = torch.tensor([tokenizer.encode("The Manhattan bridge")])
model = optModel()
# Export the wrapped model to ONNX.
onnx_program = torch.onnx.export(model, test_input, "opt-125M.onnx")
b) run: 'python ./model.py' to get opt-125M.onnx (this may take a minute)

c) run: 'python -m torch_mlir.tools.import_onnx opt-125M.onnx -o opt-125M.torch-onnx.mlir'

d) run: '<path to your torch MLIR build dir>/bin/torch-mlir-opt -convert-torch-onnx-to-torch opt-125M.fp32.torch-onnx.mlir > opt-125M.fp32.onnx.torch.mlir'
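As a side sanity check (not part of the original steps, and assuming the onnx and onnxruntime packages are installed), the exported model can be validated and run directly to confirm the export itself is usable and the failure is specific to the -convert-torch-onnx-to-torch lowering:

import numpy as np
import onnx
import onnxruntime as ort
from transformers import AutoTokenizer

# Structural validation of the exported graph.
onnx.checker.check_model(onnx.load("opt-125M.onnx"))

# Run the same sample input through onnxruntime.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125M")
tokens = np.array([tokenizer.encode("The Manhattan bridge")], dtype=np.int64)

sess = ort.InferenceSession("opt-125M.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
outputs = sess.run(None, {input_name: tokens})
print(outputs[0].shape)  # expected (1, 6, 768), matching the shapes in the error above

If this runs cleanly, the problem is confined to the missing onnx.LayerNormalization lowering rather than the export.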
https://github.com/llvm/torch-mlir/pull/2789 should fix it