
RuntimeError: PyTorch convert function for op 'normal_' not implemented.

Open · StarxSky opened this issue 2 years ago · 5 comments

🐞Describing the bug

Hello, after debugging my previous problem, I ran into the following error while converting a GPT language model. It looks like coremltools does not yet support this PyTorch operation; I hope support for it can be added.

Stack Trace

Mismatched elements: 262139 / 262144 (100.0%)
Greatest absolute difference: 3.099862426519394 at index (0, 418, 0, 24) (up to 1e-05 allowed)
Greatest relative difference: 59928.32389642812 at index (1, 471, 0, 200) (up to 1e-05 allowed)

_check_trace(
Converting Frontend ==> MIL Ops:  14%|█▍ | 169/1208 [00:00<00:00, 1312.21 ops/s]

Ran 1 test in 27.315s

FAILED (errors=1)

Error Traceback (most recent call last):
  File "C:\Users\xbj0916\Desktop\ANE-GPT-main\test_GPT.py", line 84, in test_coreml_conversion_and_speedup
    ANE_Model_CoreML_Model = ct.convert(
  File "C:\Program Files\Anaconda\envs\ML\lib\site-packages\coremltools\converters\_converters_entry.py", line 326, in convert
    mlmodel = mil_convert(
  File "C:\Program Files\Anaconda\envs\ML\lib\site-packages\coremltools\converters\mil\converter.py", line 182, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "C:\Program Files\Anaconda\envs\ML\lib\site-packages\coremltools\converters\mil\converter.py", line 209, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "C:\Program Files\Anaconda\envs\ML\lib\site-packages\coremltools\converters\mil\converter.py", line 300, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "C:\Program Files\Anaconda\envs\ML\lib\site-packages\coremltools\converters\mil\converter.py", line 104, in __call__
    return load(*args, **kwargs)
  File "C:\Program Files\Anaconda\envs\ML\lib\site-packages\coremltools\converters\mil\frontend\torch\load.py", line 50, in load
    return _perform_torch_convert(converter, debug)
  File "C:\Program Files\Anaconda\envs\ML\lib\site-packages\coremltools\converters\mil\frontend\torch\load.py", line 95, in _perform_torch_convert
    raise e
  File "C:\Program Files\Anaconda\envs\ML\lib\site-packages\coremltools\converters\mil\frontend\torch\load.py", line 87, in _perform_torch_convert
    prog = converter.convert()
  File "C:\Program Files\Anaconda\envs\ML\lib\site-packages\coremltools\converters\mil\frontend\torch\converter.py", line 239, in convert
    convert_nodes(self.context, self.graph)
  File "C:\Program Files\Anaconda\envs\ML\lib\site-packages\coremltools\converters\mil\frontend\torch\ops.py", line 73, in convert_nodes
    raise RuntimeError(
RuntimeError: PyTorch convert function for op 'normal_' not implemented.

System environment (please complete the following information):

  • coremltools version: 5.1.0
  • Windows 10
  • PyTorch 1.11.0

StarxSky · Jun 14 '22

Yes, it looks like we do not support PyTorch's normal op.

Here is a minimal example to reproduce the issue:

import torch
import coremltools as ct

class MyModel(torch.nn.Module):
    def forward(self, x):
        # torch.normal traces to an aten::normal node, which has no Core ML conversion
        return torch.normal(mean=torch.arange(1., 11.), std=torch.arange(1, 0, -0.1))

m = MyModel().eval()
x = torch.Tensor([1, 2, 3])
m = torch.jit.trace(m, x)
ct.convert(m, inputs=[ct.TensorType(shape=x.shape)])  # fails: convert function for op 'normal' not implemented

I think our random_normal MIL op should be able to support at least some of this functionality.

TobyRoseman · Jun 14 '22

I can try! Thank you!

StarxSky · Jun 15 '22

import json

import coremltools as ct
import cv2
import torch
from basicsr import img2tensor
from torchvision.transforms.functional import normalize

from gfpgan import GFPGANer, GFPGANv1Clean

model_path = '/Volumes/cuigaofeng/IdeaProjects/roop-model/models/GFPGANv1.4.pth'

model = GFPGANv1Clean(
    out_size=512,
    num_style_feat=512,
    channel_multiplier=2,
    decoder_load_path=None,
    fix_decoder=False,
    num_mlp=8,
    input_is_latent=True,
    different_w=True,
    narrow=1,
    sft_half=True)
loadnet = torch.load(model_path)
if 'params_ema' in loadnet:
    keyname = 'params_ema'
else:
    keyname = 'params'
model.load_state_dict(loadnet[keyname], strict=True)
model.eval()

img = cv2.imread('/Volumes/cuigaofeng/IdeaProjects/roop-model/inputs/input_head.jpg')
img = cv2.resize(img, (512, 512))
cropped_face_t = img2tensor(img / 255., bgr2rgb=True, float32=True)
normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
cropped_face_t = cropped_face_t.unsqueeze(0)
traced_model = torch.jit.trace(model, cropped_face_t)

_input = ct.ImageType(
    name="x",
    shape=cropped_face_t.shape,
    bias=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
    scale=1./(255*0.226))

mlmodel = ct.converters.convert(
    traced_model,
    inputs=[_input],
)

labels_json = {"labels": ["background", "aeroplane", "bicycle", "bird", "board", "bottle", "bus", "car", "cat", "chair", "cow", "diningTable", "dog", "horse", "motorbike", "person", "pottedPlant", "sheep", "sofa", "train", "tvOrMonitor"]}

mlmodel.type = 'imageSegmenter'
mlmodel.user_defined_metadata['com.apple.coreml.model.preview.params'] = json.dumps(labels_json)

mlmodel.save("SegmentationModel.mlmodel")

I have the same issue:

RuntimeError: PyTorch convert function for op 'normal_' not implemented.

cgf120 · Aug 21 '23

https://github.com/apple/coremltools/pull/1936 is an example of using mb.random_uniform to implement the torch.rand op. Implementing the normal_ op should be similar, using mb.random_normal as suggested in https://github.com/apple/coremltools/issues/1528#issuecomment-1155737497. Feel free to try it.
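
For reference, here is a rough, untested sketch of what such a registration could look like. It assumes the aten::normal_ node provides the target tensor followed by scalar mean and std values, which may not hold for every graph, so treat it as a starting point rather than a finished implementation:

from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.frontend.torch.ops import _get_inputs
from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

@register_torch_op(torch_alias=["normal_"])
def normal(context, node):
    inputs = _get_inputs(context, node)
    x = inputs[0]  # tensor whose shape the samples should match
    # Fall back to a standard normal when mean/std are not constant inputs (assumption).
    mean = inputs[1].val if len(inputs) > 1 and inputs[1] is not None else 0.0
    std = inputs[2].val if len(inputs) > 2 and inputs[2] is not None else 1.0
    shape = mb.shape(x=x)
    samples = mb.random_normal(shape=shape, mean=mean, stddev=std, name=node.name)
    context.add(samples)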

junpeiz · Aug 23 '23

@cgf120 Did you solve the problem?

Maybe this is a way to avoid using normal_ in GFPGAN:

# noise = out.new_empty(b, 1, h, w).normal_()
noise = torch.randn_like(out.new_empty(b, 1, h, w))
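
A quick standalone check (a toy module standing in for GFPGAN's noise injection, not the real model) of whether that replacement traces and converts with your coremltools version:

import torch
import coremltools as ct

class NoiseBlock(torch.nn.Module):
    def forward(self, out):
        b, _, h, w = out.shape
        # was: noise = out.new_empty(b, 1, h, w).normal_()
        noise = torch.randn_like(out.new_empty(b, 1, h, w))
        return out + noise

m = torch.jit.trace(NoiseBlock().eval(), torch.rand(1, 3, 8, 8))
ct.convert(m, inputs=[ct.TensorType(shape=(1, 3, 8, 8))])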

quqixun · Aug 30 '23