
Question re. ANE Usage with Flexible Input Shapes

Open rsomani95 opened this issue 2 years ago • 4 comments

❓Question

Not sure if this is a framework issue, or one with coremltools. My hunch is the latter, so I'm asking here.

I've exported a model that requires a flexible input shape, and set the default size of the flexible dimension to 1. This model doesn't use the ANE at all and runs only on the CPU.

Out of curiosity, I fixed the input shape to 1 to see if the model would run faster. This model uses the GPU / ANE and is significantly faster. Does this mean that ANE usage is off the table with flexible input shapes, or is there scope to redefine the model so it can use the ANE with flexible shapes too?

Unfortunately, I cannot share the model definition publicly.

Fixed input shape:

[screenshot: CleanShot 2023-02-09 at 18 46 12]

Flexible input shape:

[screenshot: CleanShot 2023-02-09 at 18 46 15]

rsomani95 avatar Feb 09 '23 17:02 rsomani95

> Not sure if this is a framework issue, or one with coremltools. My hunch is the latter, so I'm asking here.

I think this is much more likely to be an issue with the Core ML framework. At a high level, coremltools takes a source model (e.g. a TensorFlow or PyTorch model) and converts it to MIL ops. The Core ML framework then decides which device (CPU, GPU, or ANE) runs each op.

For help with the Core ML framework, you could search for or post on the Apple Developer Forums. Submitting this issue via Feedback Assistant would also be good.

Without steps to reproduce this issue, I don't think there is much we can do here.

TobyRoseman avatar Feb 10 '23 20:02 TobyRoseman

Filed internal report FB12038163

vade avatar Mar 06 '23 21:03 vade

As discussed in #1763, the model should continue to use the ANE with EnumeratedShapes, unless the flexible input shapes cause some layers to become dynamic, in which case they might not be supported on the Neural Engine. If the ops are exactly the same between the static and flexible models (say, a fully convolutional model), and the static model runs on the NE but the enumerated-shape flexible model does not, then it's likely a bug.

aseemw avatar Apr 04 '23 16:04 aseemw

It seems that even a single conv2d -> relu, when converted with an enumerated shape, creates dynamic tensors and runs on the CPU. What is the best way to get EnumeratedShapes working with the ANE?

import torch
import torch.nn as nn
import torch.nn.functional as F
import coremltools as ct
import numpy as np


class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=3, stride=1, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))
    
IMAGE_WIDTH = 640
IMAGE_HEIGHT = 480

model = Test()
model.eval()

image = np.zeros((IMAGE_HEIGHT, IMAGE_WIDTH), dtype=np.float32)
# torch.autograd.Variable is a deprecated no-op; a plain tensor suffices
image = torch.from_numpy(image).view(1, 1, IMAGE_HEIGHT, IMAGE_WIDTH)

traced_model = torch.jit.trace(model, (image))

input_shape = ct.EnumeratedShapes(shapes=[[1, 1, IMAGE_WIDTH, IMAGE_HEIGHT],
                                          [1, 1, IMAGE_HEIGHT, IMAGE_WIDTH]],
                                  default=[1, 1, IMAGE_WIDTH, IMAGE_HEIGHT])


coreml_model = ct.convert(traced_model,
                            convert_to="mlprogram",
                            compute_precision=ct.precision.FLOAT16,
                            minimum_deployment_target=ct.target.macOS13,
                            inputs=[ct.ImageType(name="input", color_layout=ct.colorlayout.GRAYSCALE_FLOAT16, shape=input_shape)],
                            outputs=[ct.ImageType(name="output", color_layout=ct.colorlayout.GRAYSCALE_FLOAT16)])
coreml_model.save("Test.mlpackage")

OkanArikan avatar Dec 17 '24 01:12 OkanArikan