
Cast INT64 to FLOAT(16)

Open Aeroxander opened this issue 5 years ago • 4 comments

So I'm trying to cast an INT64 to FLOAT16 with the help of the float-to-float16 converter. Here is the code I came up with so far.

While Netron does display all the types as float16, I still get this error when converting the ONNX model to TensorFlow:

Tensor conversion requested dtype float32 for Tensor with dtype int64: 'Tensor("Mul_3:0", shape=(), dtype=int64)'

Maybe another issue lies here, but casting the int64 to float would be the easiest solution; if anyone knows another solution, that'd be greatly appreciated!
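The error can be reproduced in miniature with plain numpy (an illustrative sketch, not code from the converter): a float32 op fed int64 data needs an explicit cast, which is exactly what the converter change should accomplish on the ONNX side.

```python
import numpy as np

# Reproduce the mismatch in miniature: float32 math fed int64 data.
a = np.array([2], dtype=np.int64)
b = np.array([1.5], dtype=np.float32)

# The ONNX-side fix is the same idea as this explicit cast: convert the
# int64 payload to a float type before it reaches a float-only op.
out = a.astype(np.float32) * b
```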

Aeroxander avatar Jun 09 '19 22:06 Aeroxander

Thanks for trying a new conversion, @Aeroxander! I'd encourage you to contribute to the community tools in onnxmltools once you've gotten it working.

@stevenlix, any insight here?

vinitra-zz avatar Jun 12 '19 21:06 vinitra-zz


I got the following code, but when I convert the ONNX model to TensorFlow it still acts like an INT64. Netron says it's a float16, but I think that's because the data is still INT64: I only managed to change the dtype parameter to float16 without changing the data itself.


            int_list = _npfloat16_to_int(np.float16(tensor.float_data))
            tensor.int32_data[:] = int_list
            tensor.float_data[:] = []
        if tensor.int64_data:
            # int64_data is a repeated proto field, so wrap it in a numpy array
            # before casting down to float16 (repeated fields have no .astype)
            int_list = _npfloat16_to_int(np.array(tensor.int64_data).astype(np.float16))
            tensor.int32_data[:] = int_list
            tensor.int64_data[:] = []
        # convert raw_data (bytes type)
        if tensor.raw_data:
            # convert n.raw_data to float

        # ... later, in the value_info loop: retag FLOAT and INT64 as FLOAT16
                    if n.type.tensor_type.elem_type == onnx_proto.TensorProto.FLOAT:
                        n.type.tensor_type.elem_type = onnx_proto.TensorProto.FLOAT16
                        value_info_list.append(n)
                    if n.type.tensor_type.elem_type == onnx_proto.TensorProto.INT64:
                        n.type.tensor_type.elem_type = onnx_proto.TensorProto.FLOAT16
                        value_info_list.append(n)
            # if q is node.attribute, process node.attribute.t and node.attribute.tensors (TensorProto)
            if isinstance(q, onnx_proto.AttributeProto):
                for n in itertools.chain(q.t, q.tensors):

So I just need to change the data, but I don't know how to test this correctly.
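The data change can be sketched in plain numpy (a hedged sketch; `int64_payload_to_float16_bits` is my name, not part of float16.py): float16 payloads live in `int32_data` as raw 16-bit patterns, so the int64 values have to be cast *and* reinterpreted.

```python
import numpy as np

# Hypothetical helper mirroring the role of _npfloat16_to_int in
# float16.py: cast int64 values down to float16, then store their raw
# 16-bit patterns as plain ints (how ONNX keeps float16 in int32_data).
def int64_payload_to_float16_bits(values):
    arr = np.asarray(values, dtype=np.int64)
    f16 = arr.astype(np.float16)            # the actual numeric cast
    # reinterpret the float16 bits as unsigned ints, then widen
    return f16.view(np.uint16).astype(np.int32).tolist()

bits = int64_payload_to_float16_bits([1, 2, -3])
# 1.0 -> 0x3C00, 2.0 -> 0x4000, -3.0 -> 0xC200
```

On the proto side one would then assign this list to `tensor.int32_data`, clear `tensor.int64_data`, and set `tensor.data_type` to FLOAT16 so the declared type matches the payload.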

But I discovered that OpenCV (the framework I load the ONNX model into) doesn't accept float16 models, so I will write another converter that converts only int64s to float32s!
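One caveat worth noting for an int64-to-float32 converter: float32 has a 24-bit mantissa, so integer values above 2**24 no longer round-trip exactly. A quick numpy check:

```python
import numpy as np

# float32 represents integers exactly only up to 2**24.
exact = np.int64(2**24)        # 16777216
lossy = np.int64(2**24 + 1)    # 16777217

# The first survives the cast unchanged; the second is rounded.
exact_roundtrips = np.float32(exact) == exact
lossy_roundtrips = np.float32(lossy) == lossy
```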

Aeroxander avatar Jun 12 '19 22:06 Aeroxander

The original float16.py code only picks up the float data type and converts it to float16. For your case, you need to pick up the int64 type and do your conversion there. The changes you made so far still only run on the float data type.

stevenlix avatar Jun 12 '19 22:06 stevenlix

@stevenlix Got it. Is there something extra I need to do to convert an int to a float? Because I would think the following would have done it:

    int_list = _npfloat16_to_int(tensor.int64_data.astype(np.float16))

and

    if n.type.tensor_type.elem_type == onnx_proto.TensorProto.INT64:
        n.type.tensor_type.elem_type = onnx_proto.TensorProto.FLOAT16

But this doesn't seem to change the data itself; is int_list not the data? I understand that changing elem_type alone only makes Netron show it as float16, so I would have thought int_list changes the data.
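To make the "is int_list not the data?" question concrete (a numpy-only illustration, not converter code): int_list does hold the data, but as float16 *bit patterns* rather than the numeric values, which is why the payload only makes sense once the tensor's declared data type is also switched to FLOAT16.

```python
import numpy as np

# The float16 value 7.0 and its 16-bit pattern are different numbers.
value = np.float16(7.0)
bits = int(np.array([value]).view(np.uint16)[0])   # raw pattern, 0x4700

# Reinterpreting the pattern as float16 recovers the original value;
# reading it as an integer (the old declared type) would not.
recovered = np.array([bits], dtype=np.uint16).view(np.float16)[0]
```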

Aeroxander avatar Jun 12 '19 23:06 Aeroxander