
PyTorch to CoreML model conversion does not handle the torch tensor narrow() operation.

leovinus2001 opened this issue 5 years ago · 7 comments

Title:

PyTorch to CoreML model conversion does not handle the torch tensor narrow() operation.

Relevance:

See the question I filed for coremltools titled "Cannot properly convert PyTorch model to .mlmodel when there is a dynamic slice/resize/narrow involved." The missing `narrow()` conversion reported here was found while investigating that question.

Reproducible:

Yes

Testcase:

Attached testNarrow.txt

Setup:

Torch version: 1.5.0
coremltools version: 4.0b1

Log:

Converting Frontend ==> MIL Ops: 50%

    Traceback (most recent call last):
      File "testNarrow.py", line 42, in <module>
        inputs= [ ct.TensorType(name="input1", shape=dummy_input.shape) ]
      File "~/Library/Python/3.7/lib/python/site-packages/coremltools/converters/_converters_entry.py", line 299, in convert
        **kwargs
      File "~/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/converter.py", line 120, in _convert
        prog = frontend_converter(model, **kwargs)
      File "~/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/converter.py", line 62, in __call__
        return load(*args, **kwargs)
      File "~/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 84, in load
        raise e
      File "~/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 76, in load
        prog = converter.convert()
      File "~/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 302, in convert
        convert_nodes(self.context, self.graph)
      File "~/Library/Python/3.7/lib/python/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 52, in convert_nodes
        "PyTorch convert function for op {} not implemented".format(node.kind)
    RuntimeError: PyTorch convert function for op narrow not implemented

leovinus2001 avatar Jul 08 '20 14:07 leovinus2001

Still an issue with coremltools 4.0b3 and PyTorch 1.6.0

leovinus2001 avatar Aug 20 '20 15:08 leovinus2001

This is still an issue with coremltools 5.0

TobyRoseman avatar Oct 13 '21 23:10 TobyRoseman

Is there a custom op implementation that we can use in the meantime?

SaulAryehKohn avatar Oct 21 '21 21:10 SaulAryehKohn

> Is there a custom op implementation that we can use in the meantime?

I don't think so. You could try creating your own composite operators. We'd welcome any pull requests for this issue.

TobyRoseman avatar Oct 21 '21 21:10 TobyRoseman

@TobyRoseman I believe this does the slicing as it happens in torch.narrow:

    from coremltools.converters.mil import Builder as mb
    from coremltools.converters.mil.frontend.torch.ops import _get_inputs
    from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

    @register_torch_op(override=True)
    def narrow(context, node):
        data, dim, start, length = _get_inputs(context, node, expected=4)
        data_shape = mb.shape(x=data).val  # static shapes only
        # narrow(dim, start, length) == slice [start : start+length] along dim
        begin = [0] * len(data_shape)
        end = list(data_shape)
        begin[dim.val] = start.val
        end[dim.val] = start.val + length.val
        out = mb.slice_by_index(x=data, begin=begin, end=end)
        context.add(out, torch_name=node.name)

but even when it completes successfully and is added to the context, I get this downstream error (with no information about which node triggers it):

Converting Frontend ==> MIL Ops:  69%|█████▌  | 2030/2949 [00:03<00:01, 521.71 ops/s]
...
...
    out_model = ct.convert(
  File "<REDACTED>/.venv/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 306, in convert
    mlmodel = mil_convert(
  File "<REDACTED>/.venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 175, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "<REDACTED>/.venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 202, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "<REDACTED>.venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 293, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "<REDACTED>.venv/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 103, in __call__
    return load(*args, **kwargs)
  File "<REDACTED>/.venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 80, in load
    raise e
  File "<REDACTED>.venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 72, in load
    prog = converter.convert()
  File "<REDACTED>.venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 230, in convert
    convert_nodes(self.context, self.graph)
  File "<REDACTED>.venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 70, in convert_nodes
    _add_op(context, node)
  File "<REDACTED>/.venv/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 2792, in index
    raise NotImplementedError("Broadcasable tensor index not supported.")
NotImplementedError: Broadcasable tensor index not supported.
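The last frame is the converter for the torch `index` op, i.e. advanced indexing where several index tensors broadcast against each other, so the failure likely comes from some other indexing expression in the model rather than from the `narrow` composite itself. As a rough sketch of the unsupported pattern, with NumPy standing in for torch tensors (not coremltools code):

```python
import numpy as np

x = np.arange(12).reshape(3, 4)
rows = np.array([[0], [2]])   # shape (2, 1)
cols = np.array([1, 3])       # shape (2,), broadcasts against rows
# The index arrays broadcast to a common (2, 2) shape before gathering.
picked = x[rows, cols]
```

In torch, the equivalent `x[rows, cols]` is lowered to `aten::index`, which is the op the converter rejects here.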

Any ideas what this might mean, either about whether the implementation above is correct, or whether this is a separate problem entirely?

SaulAryehKohn avatar Oct 22 '21 18:10 SaulAryehKohn

Sorry @SaulAryehKohn, I don't have any insights here. Have you tried unit testing your code to verify its correctness? We have lots of examples of unit testing Torch ops in test_torch_ops.py.

TobyRoseman avatar Oct 22 '21 18:10 TobyRoseman

Hi,

Is there any progress on implementing the 'narrow' op, or any custom ops? I am using the coremltools v6 beta but still get the following error:

RuntimeError: PyTorch convert function for op 'narrow' not implemented.

Best

DenizD avatar Jul 06 '22 12:07 DenizD

Same issue here with dynamic input shapes 😬

zobertke avatar Feb 24 '23 17:02 zobertke

This worked for me with dynamic shape inputs:

from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.frontend.torch.ops import _get_inputs
from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op
from coremltools.converters.mil.mil.types.symbolic import any_symbolic

@register_torch_op(override=True)
def narrow(context, node):
    data, dim, start, length = _get_inputs(context, node, expected=4)
    # Use the symbolic shape when any input dimension is dynamic
    if any_symbolic(data.shape):
        data_shape = mb.shape(x=data).sym_val
    else:
        data_shape = mb.shape(x=data).val
    # narrow(dim, start, length) == slice [start : start+length] along dim
    begin = [0] * len(data_shape)
    end = list(data_shape)
    begin[dim.val] = start.val
    end[dim.val] = start.val + length.val
    out = mb.slice_by_index(x=data, begin=begin, end=end)
    # Reshape rank-1 results to an explicit (1,) tensor for downstream ops
    if out.rank == 1:
        out = mb.reshape(x=out, shape=(1,))
    context.add(out, torch_name=node.name)
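Since `torch.narrow(dim, start, length)` is just the slice `[start : start+length]` along `dim`, the begin/end arithmetic used in both snippets can be sanity-checked outside the converter, with NumPy standing in for the tensors (`narrow_slice` below is a hypothetical helper for illustration, not part of coremltools):

```python
import numpy as np

def narrow_slice(data, dim, start, length):
    # Build begin/end exactly as the composite op does, then apply
    # them as an ordinary slice (mirroring mb.slice_by_index).
    begin = [0] * data.ndim
    end = list(data.shape)
    begin[dim] = start
    end[dim] = start + length
    return data[tuple(slice(b, e) for b, e in zip(begin, end))]

x = np.arange(24).reshape(2, 3, 4)
out = narrow_slice(x, dim=1, start=1, length=2)
assert out.shape == (2, 2, 4)
assert (out == x[:, 1:3, :]).all()  # same result as torch.narrow(x, 1, 1, 2)
```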

zobertke avatar Mar 01 '23 16:03 zobertke

@zobertke - That looks good. Thanks for sharing. If you don't mind writing some unit tests, please put up a pull request.

TobyRoseman avatar Mar 01 '23 22:03 TobyRoseman