
Support multiple outputs in extract_submodel for >=iOS 16

Open smpanaro opened this issue 7 months ago • 3 comments

Using the extract_submodel debugging utility to add additional outputs fails for deployment targets >= iOS 16 when the model is converted from PyTorch.

Running this script:

import coremltools as ct
from coremltools.converters.mil.debugging_utils import extract_submodel
import torch
from torch import nn
import numpy as np

class Net(nn.Module):
    def forward(self, x):
        x = x * x

        chunks = x.chunk(5, dim=-1)
        transformed = []
        for i in range(len(chunks)):
            transformed.append(chunks[i] * i)

        x = torch.cat(transformed, dim=-1)
        x = x ** 0.5
        return x

sample_input = torch.randn(1,32,1,512)
full_model = ct.convert(torch.jit.trace(Net().eval(), sample_input),
                        inputs=[ct.TensorType(shape=sample_input.shape, dtype=np.float16)],
                        minimum_deployment_target=ct.target.iOS16,
                        convert_to="mlprogram")
print("Full model:")
print(full_model._mil_program)
full_model.save("full_model.mlpackage")

# var_22 is the original output. var_15_cast_fp16 is an intermediate tensor that is being added as an output.
submodel = extract_submodel(full_model, outputs=["var_22", "var_15_cast_fp16"])
print("Submodel:")
print(submodel._mil_program)
submodel.save("submodel.mlpackage")

On coremltools 8.0b1:

Full model:

main[CoreML6](%x_1: (1, 32, 1, 512, fp16)(Tensor)) {
  block0() {
    %x_cast_fp16: (1, 32, 1, 512, fp16)(Tensor) = mul(x=%x_1, y=%x_1, name="x_cast_fp16")
    %var_3_cast_fp16_0: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_1: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_2: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_3: (1, 32, 1, 103, fp16)(Tensor), %var_3_cast_fp16_4: (1, 32, 1, 100, fp16)(Tensor) = split(x=%x_cast_fp16, split_sizes=[103, 103, 103, 103, 100], axis=-1, name="op_3_cast_fp16")
    %var_9_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_0, y=0.0, name="op_9_cast_fp16")
    %var_13_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_2, y=2.0, name="op_13_cast_fp16")
    %var_15_cast_fp16: (1, 32, 1, 103, fp16)(Tensor) = mul(x=%var_3_cast_fp16_3, y=3.0, name="op_15_cast_fp16")
    %var_17_cast_fp16: (1, 32, 1, 100, fp16)(Tensor) = mul(x=%var_3_cast_fp16_4, y=4.0, name="op_17_cast_fp16")
    %var_20_cast_fp16: (1, 32, 1, 512, fp16)(Tensor) = concat(values=(%var_9_cast_fp16, %var_3_cast_fp16_1, %var_13_cast_fp16, %var_15_cast_fp16, %var_17_cast_fp16), axis=-1, interleave=False, name="op_20_cast_fp16")
    %var_22: (1, 32, 1, 512, fp16)(Tensor) = pow(x=%var_20_cast_fp16, y=0.5, name="op_22_cast_fp16")
  } -> (%var_22)
}

Running MIL frontend_milinternal pipeline: 0 passes [00:00, ? passes/s]
Running MIL default pipeline: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 79/79 [00:00<00:00, 11800.21 passes/s]
Running MIL backend_mlprogram pipeline: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:00<00:00, 11387.25 passes/s]
Traceback (most recent call last):
  File "/[removed]/submodel.py", line 29, in <module>
    submodel = extract_submodel(full_model, outputs=["var_22", "var_15_cast_fp16"])
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/debugging_utils.py", line 173, in extract_submodel
    submodel = ct.convert(prog, convert_to=backend, compute_units=model.compute_unit)
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/_converters_entry.py", line 635, in convert
    mlmodel = mil_convert(
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 188, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 212, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 307, in mil_convert_to_proto
    out = backend_converter(prog, **kwargs)
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 130, in __call__
    return backend_load(*args, **kwargs)
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/backend/mil/load.py", line 1072, in load
    return coreml_proto_exporter.export(specification_version)
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/backend/mil/load.py", line 1008, in export
    func_to_output[name] = self.get_func_output(func)
  File "/[removed]/env/lib/python3.10/site-packages/coremltools/converters/mil/backend/mil/load.py", line 843, in get_func_output
    assert len(output_types) == len(
AssertionError: number of mil program outputs do not match the number of outputs provided by the user

The issue seems to be that the original output has an entry in output_types but the new output does not.
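A standalone sketch of that mismatch (the names below are placeholders, not coremltools internals): the backend compares the number of recorded output types, which still reflects only the original output, against the number of outputs in the rewritten program.

```python
# Placeholder illustration of the failing length check in get_func_output;
# these lists are stand-ins for the real objects, not coremltools code.
output_types = ["var_22_type"]                 # recorded at original conversion
prog_outputs = ["var_22", "var_15_cast_fp16"]  # outputs of the extracted submodel

if len(output_types) != len(prog_outputs):
    print("number of mil program outputs do not match "
          "the number of outputs provided by the user")
```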

I'm not sure if there is a better way to fix this. It won't work for Image outputs. It seems like passing None to set_output_types would also work. Happy to make changes if needed.
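One way the None suggestion could play out, sketched as a hypothetical helper (this function does not exist in coremltools): pad the recorded output types with None for each output that extract_submodel added, so the backend's length check holds and it can infer types for the new outputs.

```python
def pad_output_types(output_types, output_names):
    # Hypothetical helper illustrating the suggestion above: one entry per
    # requested output, with None for outputs added by extract_submodel.
    if output_types is None:
        return None  # nothing was recorded; types would be inferred anyway
    padded = list(output_types)
    padded += [None] * (len(output_names) - len(padded))
    return padded

# One recorded type (for var_22), two requested outputs.
print(pad_output_types(["fp16_tensor"], ["var_22", "var_15_cast_fp16"]))
# → ['fp16_tensor', None]
```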

smpanaro avatar Jul 08 '24 01:07 smpanaro