
experimental.linear_quantize_activations with classifier_config in ct.convert "fails" with message 'dict' object has no attribute 'flatten'

Open dessatel opened this issue 1 year ago • 3 comments

🐞Describing the bug

When a Core ML model is converted with classifier_config set, activation quantization with linear_quantize_activations prints an error for every calibration sample, e.g.: "Running compression pass linear_quantize_activations: calibrating sample 3/20 fails. 'dict' object has no attribute 'flatten'". It does not affect the actual results, other than slowing down the quantization process.

Stack Trace

  • If applicable, please paste the complete stack trace.

To Reproduce

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as transforms
import coremltools as ct
import coremltools.optimize as cto
from PIL import Image
import numpy as np
import requests

torch.manual_seed(0)
torch.use_deterministic_algorithms(True)

model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet50', pretrained=True)
model.eval()

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Calibration samples: a list of dicts keyed by the Core ML input name ("x_1")
sample_data = []

for _ in range(20):
    # Create a white image
    image = Image.new('RGB', (224, 224), (255, 255, 255))
    sample_data.append({"x_1": image})

print(f"Total images in sample_data: {len(sample_data)}")

input_tensor = transform(image).unsqueeze(0)  # Add batch dimension

with torch.no_grad():
    output = model(input_tensor)
scores_pytorch = output.numpy().squeeze()

labels_url = "https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt"
response = requests.get(labels_url)
class_labels = response.text.splitlines()
class_labels = [label for label in class_labels if label]

# Fold the ImageNet normalization into the Core ML image preprocessing
# (bias ≈ -mean/std per channel, with a single shared scale)
image_input = ct.ImageType(shape=(1, 3, 224, 224), bias=[-2.117, -2.035, -1.804], scale=1/255/0.229)
traced_model = torch.jit.trace(model, input_tensor)

coreml_model_iOS17 = ct.convert(
    traced_model,
    inputs=[image_input],
    classifier_config=ct.ClassifierConfig(class_labels=class_labels),
    minimum_deployment_target=ct.target.iOS17
)

activation_config_iOS17 = cto.coreml.OptimizationConfig(
    global_config=cto.coreml.experimental.OpActivationLinearQuantizerConfig(
        mode="linear_symmetric"
    )
)
compressed_model_a8_iOS17 = cto.coreml.experimental.linear_quantize_activations(
    coreml_model_iOS17, activation_config_iOS17, sample_data
)

weight_config_int8_iOS17 = cto.coreml.OptimizationConfig(
    global_config=cto.coreml.OpLinearQuantizerConfig(
        mode="linear_symmetric", dtype=ct.converters.mil.mil.types.int8
    )
)

compressed_model_w8a8_iOS17 = cto.coreml.linear_quantize_weights(compressed_model_a8_iOS17, weight_config_int8_iOS17)

compressed_model_w8a8_iOS17.save("resnet50-A8-iOS17.mlpackage")
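
For context on the error message: a Core ML classifier model's predict() returns a class-label string plus a per-class probability dictionary, so calibration code that calls .flatten() on each output value would hit exactly this AttributeError. Below is a minimal sketch of that reading, reusing coreml_model_iOS17 and the input name "x_1" from the script above (this is an assumption about the cause, not the actual coremltools code path):

from PIL import Image

calib_image = Image.new('RGB', (224, 224), (255, 255, 255))
out = coreml_model_iOS17.predict({"x_1": calib_image})
print({name: type(value) for name, value in out.items()})

# A classifier's outputs include a per-class probability dict; calling .flatten()
# on it reproduces the message seen during calibration.
probs = next(v for v in out.values() if isinstance(v, dict))
probs.flatten()  # AttributeError: 'dict' object has no attribute 'flatten'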

dessatel · Jun 24 '24 03:06

Your code runs fine for me using coremltools version 8.0b1.
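
A quick way to confirm the environment in play (a minimal sketch; the version numbers are the ones mentioned in this thread):

import platform

import coremltools as ct
import torch

print("coremltools:", ct.__version__)   # 8.0b1 is required for the experimental API
print("torch:", torch.__version__)      # 2.3.0 in the report below
print("macOS:", platform.mac_ver()[0])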

TobyRoseman · Jul 03 '24 00:07

I'm getting a bunch of "Running compression pass linear_quantize_activations: calibrating sample 3/20 fails." messages in JupyterLab.

Yes, it is 8.0b1; the experimental linear_quantize_activations is only available in 8.0b1. macOS 15, pip install torch==2.3.0 coremltools==8.0b1 torchvision torchaudio scikit-learn==1.1.2

Using cache found in /Users/dessa/.cache/torch/hub/pytorch_vision_v0.10.0
/Users/dessa/SourceRelease/GITHUB/ML_playground/OPT-1.3B/opt3/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/Users/dessa/SourceRelease/GITHUB/ML_playground/OPT-1.3B/opt3/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet50_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet50_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Total images in sample_data: 20
Converting PyTorch Frontend ==> MIL Ops: 100%|█████████▉| 440/441 [00:00<00:00, 6495.45 ops/s]
Running MIL frontend_pytorch pipeline: 100%|██████████| 5/5 [00:00<00:00, 162.77 passes/s]
Running MIL default pipeline:   0%|          | 0/79 [00:00<?, ? passes/s]/Users/dessa/SourceRelease/GITHUB/ML_playground/OPT-1.3B/opt3/lib/python3.10/site-packages/coremltools/converters/mil/mil/passes/defs/preprocess.py:239: UserWarning: Input, 'x.1', of the source model, has been renamed to 'x_1' in the Core ML model.
  warnings.warn(msg.format(var.name, new_name))
Running MIL default pipeline: 100%|██████████| 79/79 [00:01<00:00, 60.91 passes/s]
Running MIL backend_mlprogram pipeline: 100%|██████████| 12/12 [00:00<00:00, 275.26 passes/s]
<class 'coremltools.optimize.coreml.experimental._quantization_passes.insert_prefix_quantize_dequantize_pair'>
Running activation compression pass insert_prefix_quantize_dequantize_pair: 100%|██████████| 465/465 [00:00<00:00, 8068.17 ops/s]
Running compression pass linear_quantize_activations: start calibrating 20 samples
Running compression pass linear_quantize_activations: calibration may take a while ...
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 1/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 2/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 3/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 4/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 5/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 6/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 7/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 8/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 9/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 10/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 11/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 12/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 13/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 14/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 15/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 16/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 17/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 18/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 19/20 fails.
'dict' object has no attribute 'flatten'
Running compression pass linear_quantize_activations: calibrating sample 20/20 fails.
Running MIL frontend_milinternal pipeline: 0 passes [00:00, ? passes/s]
Running MIL default pipeline: 100%|██████████| 77/77 [00:01<00:00, 75.08 passes/s]
Running MIL backend_mlprogram pipeline: 100%|██████████| 12/12 [00:00<00:00, 140.01 passes/s]
<class 'coremltools.optimize.coreml._quantization_passes.linear_quantize_weights'>
Running compression pass linear_quantize_weights:   0%|          | 0/108 [00:00<?, ? ops/s]/Users/dessa/SourceRelease/GITHUB/ML_playground/OPT-1.3B/opt3/lib/python3.10/site-packages/coremltools/optimize/coreml/_utils.py:88: RuntimeWarning: invalid value encountered in divide
  quantized_data = np.round(weight / scale)
/Users/dessa/SourceRelease/GITHUB/ML_playground/OPT-1.3B/opt3/lib/python3.10/site-packages/coremltools/optimize/coreml/_utils.py:88: RuntimeWarning: divide by zero encountered in divide
  quantized_data = np.round(weight / scale)
Running compression pass linear_quantize_weights: 100%|██████████| 108/108 [00:00<00:00, 163.21 ops/s]
Running MIL frontend_milinternal pipeline: 0 passes [00:00, ? passes/s]
Running MIL default pipeline: 100%|██████████| 77/77 [00:00<00:00, 90.07 passes/s]
Running MIL backend_mlprogram pipeline: 100%|██████████| 12/12 [00:00<00:00, 116.13 passes/s]

dessatel · Jul 05 '24 21:07

I'm on macOS 14 and it works for me.

'dict' object has no attribute 'flatten' is not a very helpful error. It looks like we're catching the original exception there. You could try removing that catch, so we can get a stack trace for the original issue.
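
One way to act on that without digging through the source by hand (a hypothetical sketch; it only relies on the message text shown in the log above):

import os
import subprocess
import coremltools

# Locate where the installed package prints the "calibrating sample ... fails." message,
# i.e. the except block that swallows the original exception.
pkg_dir = os.path.dirname(coremltools.__file__)
subprocess.run(["grep", "-rn", "calibrating sample", pkg_dir], check=False)

# Temporarily add `raise` (or `import traceback; traceback.print_exc()`) inside that
# except block, then rerun linear_quantize_activations to capture the full stack trace.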

TobyRoseman · Jul 05 '24 22:07