Title:
PyTorch to CoreML model conversion does not handle the torch tensor narrow() operation.
Relevance:
See the question I filed for coremltools titled "Cannot properly convert PyTorch model to .mlmodel when there is a dynamic slice/resize/narrow involved." This report covers the narrow() missing-op bug that was found as part of that question.
Reproducible:
Yes
Testcase:
Attached testNarrow.txt
Setup:
Torch version: 1.5.0
coremltools version: 4.0b1
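Since the attached testcase is not reproduced inline, here is a minimal sketch of the kind of model that triggers the failure (the module, shapes, and names are illustrative, not taken from testNarrow.txt):
import torch
import coremltools as ct

class NarrowModel(torch.nn.Module):
    def forward(self, x):
        # keep 2 elements along dim 1, starting at index 1
        return torch.narrow(x, 1, 1, 2)

example = torch.rand(1, 4, 8)
traced = torch.jit.trace(NarrowModel().eval(), example)

# Fails during the torch frontend pass because 'narrow' has no convert function
mlmodel = ct.convert(traced, inputs=[ct.TensorType(shape=example.shape)])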
Log:
Converting Frontend ==> MIL Ops: 50%
Traceback (most recent call last):
File "testNarrow.py", line 42, in
Still an issue with coremltools 4.0b3 and PyTorch 1.6.0
This is still an issue with coremltools 5.0
Is there a custom op implementation that we can use in the meantime?
I don't think so. You could try creating your own composite operators. We'd welcome any pull requests for this issue.
@TobyRoseman I believe this does the slicing as it happens in torch.narrow:
# Imports for the MIL builder and torch frontend helpers
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.frontend.torch.ops import _get_inputs
from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

@register_torch_op(override=True)
def narrow(context, node):
    data, dim, start, length = _get_inputs(context, node, expected=4)
    # Static shape of the input tensor
    data_shape = mb.shape(x=data).val
    # Slice the full extent of every dimension, then restrict the narrowed one
    begin = [0] * len(data_shape)
    end = [x for x in data_shape]
    begin[dim.val] = start.val
    end[dim.val] = start.val + length.val
    out = mb.slice_by_index(x=data, begin=begin, end=end)
    context.add(out, torch_name=node.name)
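For reference on what the begin/end computation above mirrors: torch.narrow(x, dim, start, length) is just a slice of length elements starting at start along dim, which a quick plain-PyTorch sanity check (independent of the converter) confirms:
import torch

x = torch.arange(12).reshape(3, 4)
# narrow along dim=1, starting at index 1, keeping 2 elements,
# is the same as the slice x[:, 1:3] -- i.e. begin=1, end=1+2
assert torch.equal(torch.narrow(x, 1, 1, 2), x[:, 1:3])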
but even when it completes successfully and the result is added to the context, I get this downstream error (with no information about which node triggers it):
Converting Frontend ==> MIL Ops: 69%|█████▌ | 2030/2949 [00:03<00:01, 521.71 ops/s]
...
...
out_model = ct.convert(
File "<REDACTED>/.venv/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 306, in convert
mlmodel = mil_convert(
File "<REDACTED>/.venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 175, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "<REDACTED>/.venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 202, in _mil_convert
proto, mil_program = mil_convert_to_proto(
File "<REDACTED>.venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 293, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "<REDACTED>.venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 103, in __call__
return load(*args, **kwargs)
File "<REDACTED>/.venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 80, in load
raise e
File "<REDACTED>.venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 72, in load
prog = converter.convert()
File "<REDACTED>.venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 230, in convert
convert_nodes(self.context, self.graph)
File "<REDACTED>.venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 70, in convert_nodes
_add_op(context, node)
File "<REDACTED>/.venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 2792, in index
raise NotImplementedError("Broadcasable tensor index not supported.")
NotImplementedError: Broadcasable tensor index not supported.
Any ideas what this might mean? Either about whether the implementation is correct, or whether it might be a separate problem entirely?
Sorry @SaulAryehKohn, I don't have any insights here. Have you tried unit testing your code in order to verify correctness? We have lots of examples of unit testing Torch ops in test_torch_ops.py.
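For anyone attempting this, a rough sketch of such a test (not taken from test_torch_ops.py; the module, shapes, input name, and tolerances are illustrative) could compare the converted model against torch.narrow directly:
import numpy as np
import torch
import coremltools as ct

def test_narrow_conversion():
    class Net(torch.nn.Module):
        def forward(self, x):
            return torch.narrow(x, 1, 1, 2)

    example = torch.rand(1, 4, 8)
    traced = torch.jit.trace(Net().eval(), example)

    # Assumes the narrow() override above has already been registered
    mlmodel = ct.convert(traced, inputs=[ct.TensorType(name="x", shape=example.shape)])

    expected = Net()(example).numpy()
    # predict() requires macOS; the output name is read from the converted spec
    out_name = mlmodel.get_spec().description.output[0].name
    got = mlmodel.predict({"x": example.numpy()})[out_name]
    np.testing.assert_allclose(got, expected, rtol=1e-3, atol=1e-4)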
Hi,
Is there any progress regarding the 'narrow' function implementation, or any custom ops? I am using the coremltools v6 beta but still get the following error:
RuntimeError: PyTorch convert function for op 'narrow' not implemented.
Best
Same issue here with dynamic input shape :grimacing:
This worked for me with dynamic shape inputs:
# In addition to the imports in the earlier snippet:
from coremltools.converters.mil.mil.types.symbolic import any_symbolic

@register_torch_op(override=True)
def narrow(context, node):
    data, dim, start, length = _get_inputs(context, node, expected=4)
    # Use the symbolic shape when the input has flexible/dynamic dimensions
    if any_symbolic(data.shape):
        data_shape = mb.shape(x=data).sym_val
    else:
        data_shape = mb.shape(x=data).val
    begin = [0] * len(data_shape)
    end = [x for x in data_shape]
    begin[dim.val] = start.val
    end[dim.val] = start.val + length.val
    out = mb.slice_by_index(x=data, begin=begin, end=end)
    if out.rank == 1:
        out = mb.reshape(x=out, shape=(1,))
    context.add(out, torch_name=node.name)
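For context, a sketch of the kind of flexible-shape conversion this override is aimed at (the model, dimension bounds, and names are illustrative, not from the thread):
import torch
import coremltools as ct

class Net(torch.nn.Module):
    def forward(self, x):
        return torch.narrow(x, 1, 1, 2)

example = torch.rand(1, 16, 8)
traced = torch.jit.trace(Net().eval(), example)

# A RangeDim on the second dimension is what produces symbolic values
# in data.shape inside the narrow() override above.
flexible_input = ct.TensorType(name="x", shape=(1, ct.RangeDim(4, 64), 8))
mlmodel = ct.convert(traced, inputs=[flexible_input])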
@zobertke - That looks good. Thanks for sharing. If you don't mind writing some unit tests, please put up a pull request.