Unable to return `int32` output

Open olokobayusuf opened this issue 2 years ago • 4 comments

🐞 Describing the bug

It seems that non-float32 outputs always get cast to float32. This breaks segmentation models that return per-pixel indices. Here's the relevant source code:

https://github.com/apple/coremltools/blob/973eae67f2f273a29e80a9b009987516a070a58b/coremltools/converters/mil/backend/nn/passes/alert_return_type_cast.py#L13-L20

This is at odds with the fact that Core ML does support int32 return types, and with the fact that coremltools itself handles int32 outputs in its mil backend:

https://github.com/apple/coremltools/blob/973eae67f2f273a29e80a9b009987516a070a58b/coremltools/converters/mil/backend/mil/passes/adjust_io_to_supported_types.py#L126-L133
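
For reference, int32 is a first-class multi-array type in the Core ML model spec itself. A quick check against the protobuf bindings that ship with coremltools (a sketch independent of the converter):

from coremltools.proto import FeatureTypes_pb2 as ft

# INT32 is among the ArrayDataType values the Core ML spec defines
# for multi-array features, alongside the float types.
print('INT32' in ft.ArrayFeatureType.ArrayDataType.keys())  # True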

Why the discrepancy?

To Reproduce

Run:

from coremltools import convert, TensorType
from torch import int32, randn
from torch.jit import trace
from torch.nn import Module

class Model(Module):

    def __init__(self):
        super().__init__()

    def forward(self, input0):
        return input0.to(int32)


example_input = randn(1, 256, 256, 3)
model = Model()
scripted_model = trace(model, [example_input])

coreml_model = convert(scripted_model, inputs=[TensorType(shape=example_input.shape)])

And observe:

Running MIL Common passes: 100%|████████████████████████████████████████| 37/37 [00:00<00:00, 616.52 passes/s]
Running MIL Clean up passes: 100%|████████████████████████████████████████| 9/9 [00:00<00:00, 8418.54 passes/s]
WARNING:root:Output var var_6 of type int32 in function main is cast to type fp32
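
The cast can also be confirmed without running a prediction by inspecting the output types declared in the model's protobuf spec. A minimal sketch using the standard MLModel.get_spec() API (it assumes the coreml_model from the snippet above):

from coremltools.proto import FeatureTypes_pb2 as ft

spec = coreml_model.get_spec()
for out in spec.description.output:
    if out.type.WhichOneof('Type') == 'multiArrayType':
        # Map the ArrayDataType enum value back to its name.
        dtype_name = ft.ArrayFeatureType.ArrayDataType.Name(
            out.type.multiArrayType.dataType)
        print(out.name, dtype_name)  # reports FLOAT32 here, not INT32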

System environment (please complete the following information):

  • coremltools version: 5.2.0 and 6.0b1
  • OS (e.g. MacOS version or Linux type): macOS 12.3.1
  • Any other relevant version information (e.g. PyTorch or TensorFlow version): Torch 1.11

olokobayusuf · Jul 04 '22 16:07

Thanks for the minimal example. I can reproduce the issue.

This is not just a neural network backend issue. The following also produces a Core ML model with float output:

coreml_model = convert(
    scripted_model,
    inputs=[TensorType(shape=example_input.shape)],
    convert_to='mlprogram'
)

Also, even if we explicitly specify an int32 output, we still get a float output. The following, too, produces a Core ML model with float output:

import numpy as np

coreml_model = convert(
    scripted_model,
    inputs=[TensorType(shape=example_input.shape)],
    outputs=[TensorType(dtype=np.int32)],
    convert_to='mlprogram'
)
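
Until the converter honors the requested dtype, one possible stopgap is to round the fp32 output back to int32 on the caller's side. A sketch, not a fix: predict() only runs on macOS, and 'input0' is the auto-generated input name from the trace, so the names may differ for other models.

import numpy as np

# 'input0' is the auto-generated input name from the traced forward().
pred = coreml_model.predict({'input0': example_input.numpy()})
output = next(iter(pred.values()))  # avoids hard-coding the output name
print(output.dtype)  # float32, despite the int32 request
indices = np.rint(output).astype(np.int32)  # recover the integer values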

TobyRoseman · Jul 05 '22 19:07

What's up @TobyRoseman. Yes, your observation is correct; I had tried it out but forgot to mention it in the original post. I would have come up with a fix and a PR, but I'm not familiar enough with the architecture of coremltools, and I don't have time to study it. Any ideas on how easy this is to fix, and what it would entail?

olokobayusuf · Jul 05 '22 19:07

Hi @olokobayusuf - I don't understand what you are saying. Do you have a fix for this issue? If so, please put up a PR. I can help you get it properly into the code base.

TobyRoseman · Jul 05 '22 21:07

No, I said I would have tried to come up with a fix, but I'm not familiar with the architecture of the codebase and don't have the time. So I'm asking if you have any ideas on what the fix would be; I do not know how to fix this.

olokobayusuf · Jul 05 '22 23:07