
KeyError: 'min'

Open e2r-htz opened this issue 5 years ago • 15 comments

KeyError                                  Traceback (most recent call last)
<ipython-input-22-6be9743dbc2b> in <module>
      1 from onnx2keras import onnx_to_keras
      2 model=onnx.load("optimized_mobile_pydnet.onnx")
----> 3 k_model = onnx_to_keras(onnx_model=model, input_names=['input'])

~/anaconda3/envs/e2r/lib/python3.7/site-packages/onnx2keras/converter.py in onnx_to_keras(onnx_model, input_names, input_shapes, name_policy, verbose, change_ordering)
    179             lambda_funcs,
    180             node_name,
--> 181             keras_names
    182         )
    183         if isinstance(keras_names, list):

~/anaconda3/envs/e2r/lib/python3.7/site-packages/onnx2keras/operation_layers.py in convert_clip(node, params, layers, lambda_func, node_name, keras_name)
     29     input_0 = ensure_tf_type(layers[node.input[0]], name="%s_const" % keras_name)
     30 
---> 31     if params['min'] == 0:
     32         logger.debug("Using ReLU({0}) instead of clip".format(params['max']))
     33         layer = keras.layers.ReLU(max_value=params['max'], name=keras_name)

KeyError: 'min'

e2r-htz avatar Sep 10 '20 14:09 e2r-htz

Also KeyError: 'Resize'

e2r-htz avatar Sep 10 '20 15:09 e2r-htz

@e2r-htz , found any workaround? Do share if you find one.

san-guy avatar Nov 02 '20 20:11 san-guy

Facing same issue: KeyError: 'min'

anilsathyan7 avatar Dec 11 '20 06:12 anilsathyan7

I have come here to rescue you guys. This bug is caused by a version conflict between the installed onnx package and the ONNX opset that torch uses to export. By inspecting the exported file with onnx, you can find that the failing key does not match the newest set of ONNX op converters. The current dict is here:

AVAILABLE_CONVERTERS = {
    'Conv': convert_conv,
    'ConvTranspose': convert_convtranspose,
    'Relu': convert_relu,
    'Elu': convert_elu,
    'LeakyRelu': convert_lrelu,
    'Sigmoid': convert_sigmoid,
    'Tanh': convert_tanh,
    'Selu': convert_selu,
    'Clip': convert_clip,
    'Exp': convert_exp,
    'Log': convert_log,
    'Softmax': convert_softmax,
    'PRelu': convert_prelu,
    'ReduceMax': convert_reduce_max,
    'ReduceSum': convert_reduce_sum,
    'ReduceMean': convert_reduce_mean,
    'Pow': convert_pow,
    'Slice': convert_slice,
    'Squeeze': convert_squeeze,
    'Expand': convert_expand,
    'Sqrt': convert_sqrt,
    'Split': convert_split,
    'Cast': convert_cast,
    'Floor': convert_floor,
    'Identity': convert_identity,
    'ArgMax': convert_argmax,
    'ReduceL2': convert_reduce_l2,
    'Max': convert_max,
    'Min': convert_min,
    'Mean': convert_mean,
    'Div': convert_elementwise_div,
    'Add': convert_elementwise_add,
    'Sum': convert_elementwise_add,
    'Mul': convert_elementwise_mul,
    'Sub': convert_elementwise_sub,
    'Gemm': convert_gemm,
    'MatMul': convert_gemm,
    'Transpose': convert_transpose,
    'Constant': convert_constant,
    'BatchNormalization': convert_batchnorm,
    'InstanceNormalization': convert_instancenorm,
    'Dropout': convert_dropout,
    'LRN': convert_lrn,
    'MaxPool': convert_maxpool,
    'AveragePool': convert_avgpool,
    'GlobalAveragePool': convert_global_avg_pool,
    'Shape': convert_shape,
    'Gather': convert_gather,
    'Unsqueeze': convert_unsqueeze,
    'Concat': convert_concat,
    'Reshape': convert_reshape,
    'Pad': convert_padding,
    'Flatten': convert_flatten,
    'Upsample': convert_upsample,
}

Therefore, you can edit the lookup so the failing node_type maps to the correct key in the dict (i.e. when it fails on min/Resize, map to Min/Upsample). This can easily be done by editing the source file of onnx2keras. It may also require changing the node params, as Upsample expects the size param as scales. For details, please look here. A minimal sketch of the edit follows.
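For illustration, a minimal monkey-patch sketch of that edit from user code, so the installed package itself does not have to be modified. The module paths are an assumption based on the pip release of onnx2keras; verify them against your installed version:

```python
# Hypothetical monkey-patch: route ONNX Resize nodes through the existing
# Upsample converter. Module paths may differ between onnx2keras versions.
from onnx2keras import layers as o2k_layers
from onnx2keras.upsampling_layers import convert_upsample

# AVAILABLE_CONVERTERS is the dict shown above; mutating it in place
# affects the converter because it holds a reference to the same object.
o2k_layers.AVAILABLE_CONVERTERS['Resize'] = convert_upsample
```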

dtlam26 avatar Jan 15 '21 12:01 dtlam26

For the Clip operator, it seems the converter supports ONNX operator sets <= 6, where min and max are attributes. However, for ONNX operator sets >= 11, min and max are inputs, which causes the above KeyError: 'min'.
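A quick way to confirm which form your model uses (a minimal sketch; the file name is taken from the traceback above):

```python
import onnx

model = onnx.load("optimized_mobile_pydnet.onnx")
for node in model.graph.node:
    if node.op_type == "Clip":
        # Opset <= 10 exports: min/max show up as attributes.
        # Opset >= 11 exports: min/max show up as extra inputs.
        print(node.name,
              "attributes:", [a.name for a in node.attribute],
              "inputs:", list(node.input))
```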

bominn avatar Jan 19 '21 08:01 bominn

Thank you @dtlam26. I did as you said and added 'Resize': convert_upsample to the dict. But I couldn't change its node params. I get the following error:

ValueError: The 'size' argument must be a tuple of 2 integers. Received: []

Where do I set the size param? I thought for opset >= 11 it is taken automatically from the input image.

Update: I was able to convert to Keras and compile it on Edge TPU but the Upsample RESIZE_NEAREST_NEIGHBOR was not mapped on the TPU (after TFLite conversion for running on Coral TPU), saying Operation version not supported. I know this has nothing to do with onnx2keras but it would be great if someone already has any luck with the same.

e2r-htz avatar Feb 10 '21 08:02 e2r-htz

> Thank you @dtlam26. I did as you said and added 'Resize': convert_upsample to the dict. But I couldn't change its node params. I get the following error:
>
> ValueError: The 'size' argument must be a tuple of 2 integers. Received: []
>
> Where do I set the size param? I thought for opset >= 11 it is taken automatically from the input image.
>
> Update: I was able to convert to Keras and compile it on Edge TPU, but the Upsample RESIZE_NEAREST_NEIGHBOR was not mapped on the TPU (after TFLite conversion for running on Coral TPU), saying Operation version not supported. I know this has nothing to do with onnx2keras, but it would be great if someone already has any luck with the same.

Sorry to inform you of this, but for now the Upsampling layer can't be fully converted for the Coral TPU, because of the risk of losing precision when this layer is quantized to full int. You can only deal with it by setting up your own quantization through quantization-aware training.

dtlam26 avatar Mar 04 '21 07:03 dtlam26

@e2r-htz

> Update: I was able to convert to Keras and compile it on Edge TPU, but the Upsample RESIZE_NEAREST_NEIGHBOR was not mapped on the TPU (after TFLite conversion for running on Coral TPU), saying Operation version not supported. I know this has nothing to do with onnx2keras, but it would be great if someone already has any luck with the same.

Can you please share how you were able to solve this?

greysou1 avatar May 23 '21 14:05 greysou1

@dtlam26 When I convert the ONNX model to a Keras model and do as you said, adding 'Resize': convert_upsample to the dict, I get ValueError: The 'size' argument must be a tuple of 2 integers. Received: []. How can I resolve it?

APeiZou avatar Jul 22 '21 03:07 APeiZou

> @dtlam26 When I convert the ONNX model to a Keras model and do as you said, adding 'Resize': convert_upsample to the dict, I get ValueError: The 'size' argument must be a tuple of 2 integers. Received: []. How can I resolve it?

As I said, depending on the version of your Keras, the size argument may be exchanged with the scale argument in this line. For your sake, you can print out all the params keys and map them correctly yourself; see the debugging sketch below. I know this is tricky, but that is a way to overcome it.
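A minimal debugging sketch of that idea (a hypothetical edit at the top of the converter; the signature matches the onnx2keras converters shown earlier in this thread):

```python
def convert_upsample(node, params, layers, lambda_func, node_name, keras_name):
    # Temporary debug print: see which keys this node actually carries
    # ('scales', 'size', ...) before mapping them to UpSampling2D arguments.
    print(node_name, "params keys:", sorted(params.keys()),
          "inputs:", list(node.input))
    # ... original converter body continues here ...
```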

dtlam26 avatar Aug 13 '21 10:08 dtlam26

> the size argument may be exchanged with the scale argument

Thanks for your reply, but exchanging size with scale did not solve the issue!

onkarkris avatar Oct 21 '21 06:10 onkarkris

@bominn, can you explain where I can make the changes that you suggest to avoid KeyError: 'min'?

Pguhan avatar Mar 17 '22 00:03 Pguhan

I was able to solve the problem. convert_clip() expects a key min inside params, but in current ONNX the min and max values are passed as inputs (not as attributes).

So we can add params["min"] and params["max"] before they are accessed.

  1. Open the operation_layers.py file (it may be located at .../envs/.../lib/python3.9/site-packages/onnx2keras/operation_layers.py, or use the VS Code navigator to find it).
  2. In the convert_clip() method, add the following lines at the beginning of the method:
def convert_clip(node, params, layers, lambda_func, node_name, keras_name):
    if len(node.input) == 3:
        params["min"] = ensure_numpy_type(layers[node.input[1]]).astype(int)
        params["max"] = ensure_numpy_type(layers[node.input[2]]).astype(int)
    else:
        # You could raise an exception here to make sure the assignments above always happen.
        pass
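One caveat (my own observation, not part of the original suggestion): .astype(int) truncates non-integer bounds, so it is fine for the common Clip(0, 6) / ReLU6 case but would corrupt fractional clip values. A float-preserving variant of the same fix:

```python
def convert_clip(node, params, layers, lambda_func, node_name, keras_name):
    if len(node.input) == 3:
        # Read min/max from the node inputs (opset >= 11) and keep them
        # as floats so fractional clip bounds survive.
        params["min"] = float(ensure_numpy_type(layers[node.input[1]]))
        params["max"] = float(ensure_numpy_type(layers[node.input[2]]))
    # ... rest of the original convert_clip body ...
```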

uzzal-podder avatar May 18 '22 04:05 uzzal-podder

For people with the resize problem, changing this line

from scale = np.uint8(layers[node.input[1]][-2:]) to scale = np.uint8(layers[node.input[-1]][-2:])

solved it for me.

Generally speaking, here is how you might solve this problem for other operators: visualise your ONNX model in Netron to get the node number of the params that the layer needs, and then step through the code with PyCharm (or another debugger) to see how you can use this information. I'm not an expert on ONNX, but apparently parameters for certain layers are also stored as nodes, as in the Resize case for the parameters that say how much the layer is supposed to upscale.
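If you prefer to stay in Python instead of (or alongside) Netron, a small inspection sketch (the file name is a placeholder):

```python
import onnx
from onnx import numpy_helper

model = onnx.load("model.onnx")  # placeholder file name
# Constant inputs (roi, scales, sizes for Resize) usually live in the initializers.
inits = {t.name: numpy_helper.to_array(t) for t in model.graph.initializer}
for node in model.graph.node:
    if node.op_type == "Resize":
        for name in node.input:
            print(name, "->", inits.get(name, "<dynamic tensor>"))
```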

JannisWolf avatar Jun 14 '22 11:06 JannisWolf

> I was able to solve the problem. convert_clip() expects a key min inside params, but in current ONNX the min and max values are passed as inputs (not as attributes).
>
> So we can add params["min"] and params["max"] before they are accessed.
>
> 1. Open the `operation_layers.py` file (it may be located at `.../envs/.../lib/python3.9/site-packages/onnx2keras/operation_layers.py`, or use the VS Code navigator to find it).
>
> 2. In the `convert_clip()` method, add the following lines at the **beginning of the convert_clip() method**
>
> def convert_clip(node, params, layers, lambda_func, node_name, keras_name):
>     if len(node.input) == 3:
>         params["min"] = ensure_numpy_type(layers[node.input[1]]).astype(int)
>         params["max"] = ensure_numpy_type(layers[node.input[2]]).astype(int)
>     else:
>         # You could raise an exception here to make sure the assignments above always happen.
>         pass

Thank you @uzzal-podder, your solution worked for me. As you said, the error comes from the fact that convert_clip() expects min and max as attributes but receives them as inputs. This is visible in the following figures produced with netron.app. I exported a torch model to ONNX with torch.onnx.export; the left-hand side uses opset_version=7 and the right-hand side opset_version=11.

[Figure: diff_op_onnx2keras — the same Clip node exported with opset_version=7 (min/max as attributes, left) vs. opset_version=11 (min/max as inputs, right)]
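For reference, a minimal export sketch (model and shapes are illustrative) that reproduces the two graph variants shown in the figure:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU6())  # ReLU6 exports as a Clip node
dummy = torch.randn(1, 3, 64, 64)

# Opset 7: Clip carries min/max as attributes; opset 11: as inputs.
torch.onnx.export(model, dummy, "clip_opset7.onnx", opset_version=7)
torch.onnx.export(model, dummy, "clip_opset11.onnx", opset_version=11)
```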

ngazagna avatar Apr 04 '23 12:04 ngazagna