CoreML-Custom-Layers
Only channel and sequence concatenation are supported.
Hello again :) I truly appreciate your help, and I am sorry for taking up your time; your input on this error would be very helpful to me.
The error is raised at this concatenation:
rnn2_merged = concatenate([rnn_2, rnn_2b])
I was wondering whether you have any solution, workaround, or idea for this. I spent the whole day yesterday searching, but I could not find any useful link explaining how to resolve this error.
This is the error:
Traceback (most recent call last):
File "/home/sgnbx/Downloads/projects/CRNN-with-STN-master/CRNN_with_STN.py", line 201, in <module>
custom_conversion_functions={"Lambda": convert_lambda},
File "/home/sgnbx/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/coremltools/converters/keras/_keras_converter.py", line 760, in convert
custom_conversion_functions=custom_conversion_functions)
File "/home/sgnbx/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/coremltools/converters/keras/_keras_converter.py", line 556, in convertToSpec
custom_objects=custom_objects)
File "/home/sgnbx/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/coremltools/converters/keras/_keras2_converter.py", line 359, in _convert
converter_func(builder, layer, input_names, output_names, keras_layer)
File "/home/sgnbx/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/coremltools/converters/keras/_layers2.py", line 636, in convert_merge
mode = _get_elementwise_name_from_keras_layer(keras_layer)
File "/home/sgnbx/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/coremltools/converters/keras/_layers2.py", line 92, in _get_elementwise_name_from_keras_layer
raise ValueError('Only channel and sequence concatenation are supported.')
ValueError: Only channel and sequence concatenation are supported.
28 : concatenate_1, <keras.layers.merge.Concatenate object at 0x7f237c23b908>
Again, thank you very much for taking the time. It's much appreciated.
It seems like Core ML does not support this. What do the outputs of these RNN layers look like? Perhaps they can be reshaped before the concatenation.
Thank you so much again. I thought maybe there was a trick for this. I was also searching for Keras OCR code that does not use concatenate, but so far I have not found any.
So the shapes are:
rnn_2 = (?, ?, 128)
rnn_2b = (?, ?, 128)
rnn2_merged= (?, ?, 256)
Really, thanks :)
But what do the ? represent here?
I think it means the shape can vary between runs and is not fixed in the graph, since this is sequential (variable-length) data.
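For example, something like this (not my actual layers, just to illustrate where the `?` comes from):

```
from keras.layers import Input, LSTM

x = Input(shape=(None, 32))                 # time length left unspecified -> shows up as ?
rnn = LSTM(128, return_sequences=True)(x)   # output shape is (?, ?, 128)
```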
Thanks.
The issue here is that Core ML puts things in a different order than Keras. I think it might work if you change the shape to (?, ?, 128, 1, 1). The extra (1, 1) is used to trick Core ML into doing the right thing. However, I'm not sure where to make this change; it probably needs to happen inside coremltools itself.
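On the Keras side, a rough (and untested) sketch of adding those extra dimensions might look like this; the 128 comes from the shapes you posted, and the layer names are placeholders:

```
from keras.layers import Reshape

# Add two trailing unit axes so the merge happens on an axis Core ML understands:
# (?, ?, 128) -> (?, ?, 128, 1, 1); the variable time axis stays as the single -1.
rnn_2  = Reshape((-1, 128, 1, 1))(rnn_2)
rnn_2b = Reshape((-1, 128, 1, 1))(rnn_2b)
```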
I tried your idea.
rnn_2 = (?, ?, 128, 1, 1)
rnn_2b = (?, ?, 128, 1, 1)
but when I concatenate them, the shape of the concatenated array is:
rnn2_merged = (?, ?, 128, 1, 2)
so it did not concatenate along the correct axis.
I also tried another approach: not changing the shape of rnn_2 and rnn_2b, but changing the shape of the merged result instead, i.e.
rnn2_merged = (?, ?, 256, 1, 1)
But in this case it could not build the model, and this error is raised:
Traceback (most recent call last):
File "/home/sgnbx/Downloads/projects/CRNN-with-STN-master/CRNN_with_STN.py", line 145, in <module>
base_model = Model(inputs=inputShape, outputs=fc_2) # the model for prediecting
File "/home/sgnbx/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/sgnbx/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/keras/engine/network.py", line 93, in __init__
self._init_graph_network(*args, **kwargs)
File "/home/sgnbx/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/keras/engine/network.py", line 231, in _init_graph_network
self.inputs, self.outputs)
File "/home/sgnbx/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/keras/engine/network.py", line 1366, in _map_graph_network
tensor_index=tensor_index)
File "/home/sgnbx/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/keras/engine/network.py", line 1353, in build_map
node_index, tensor_index)
File "/home/sgnbx/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/keras/engine/network.py", line 1353, in build_map
node_index, tensor_index)
File "/home/sgnbx/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/keras/engine/network.py", line 1325, in build_map
node = layer._inbound_nodes[node_index]
AttributeError: 'NoneType' object has no attribute '_inbound_nodes'
You would need to tell Keras to concatenate on axis=2 instead of the last axis.
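For example (assuming the reshaped (?, ?, 128, 1, 1) tensors from above):

```
from keras.layers import concatenate

# Concatenate along the 128-sized axis instead of the trailing unit axis.
rnn2_merged = concatenate([rnn_2, rnn_2b], axis=2)   # -> (?, ?, 256, 1, 1)
```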
I just tried it, so the shape is now
rnn2_merged = (?, ?, 256, 1, 1)
but the same error as above is raised.
I am really sorry for taking up your time.
There is a good explanation here, where the author changed his model so that it works with coremltools, but I cannot work out what changes I could make to mine: https://github.com/iwantooxxoox/Keras-OpenFace/issues/1
I found a workaround that uses a trick from here: https://qiita.com/Mco7777/items/a0e2c3d75874a0052743

```
r2 = rnn_2[..., np.newaxis]
rnn2_merged = concatenate([rnn_2, rnn_2b])
Model_input = [rnn_2, rnn_2b]
permuted = [Permute((1, 3, 2))(Reshape((r2))(_i)) for _i in Model_input]
concate = concatenate(axis=3)(permuted)
permuted2 = Reshape((-1, -1, 256))(Permute((1, 3, 2))(concate))
rnn2_merged = Model(inputs=rnn2_merged, outputs=permuted2)
```

I just have a problem in the for loop; it raises `TypeError: Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn.`
Do you have any idea about this?
I changed the line above from `permuted = [Permute((1,3,2))(Reshape((r2))(_i)) for _i in Model_input]` to `permuted = [Permute((1,3,2))(Reshape((-1,-1,128,1))(_i)) for _i in Model_input]`.
Now it complains about the -1 dimensions: `ValueError: Can only specify one unknown dimension.`
If I could resolve this part I think it would work, but I'm not sure whether there is a trick for the Reshape here!
Looks like you can't use -1, -1 there. You will have to put the actual dimension size in place of at least one of these -1s.
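For example, keeping the variable time axis as the single -1 and making the inserted axis an explicit 1 (just a guess at what the reshape is meant to do, based on the (?, ?, 128) shapes above):

```
from keras.layers import Permute, Reshape

# A Keras Reshape target may contain at most one -1; all other sizes must be concrete.
permuted = [Permute((1, 3, 2))(Reshape((-1, 128, 1))(_i)) for _i in Model_input]
```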
Well, I don't think I can know the exact dimension beforehand, as the model is supposed to take variable-length input for the LSTM cell.
I finally converted the model; I will share my solution later so it may be helpful for someone else. Now I have a question: on the mobile side it does not raise any error, but it also does not produce any result. Could you please point out anything I may have missed?
For those who are still looking this up in the future: I managed to get it working by reshaping my tensors, concatenating them along the Core ML-compliant dimension (in my case, sequence), and then reshaping them back. However, I did have to replace the None dimension with 1s. This wasn't a problem in my case, because I have to pass samples into my recurrent Core ML model one by one (to get around some other Core ML shortcomings) and re-pass the GRU state each time. Nonetheless, I tried and failed to find a way to fix the None business.
decoder_input = Input(shape=(1,1536,))
decoder_input_reshape = Reshape((1536,1,))(decoder_input)
motion_input = Input(shape=(1,6,))
motion_input_reshape = Reshape((6,1,))(motion_input)
gru_input = Concatenate(axis=1)([decoder_input_reshape, motion_input_reshape])
gru_input_reshape = Reshape((1,1542))(gru_input)
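For reference, here is the same pattern as a self-contained sketch wrapped into a convertible model; the GRU size (256), the single output, and the converter call are illustrative additions rather than the exact model described above:

```
import coremltools
from keras.layers import Input, Reshape, Concatenate, GRU
from keras.models import Model

# Fixed-length (single-step) inputs, so Core ML never sees a None dimension.
decoder_input = Input(shape=(1, 1536,))
decoder_input_reshape = Reshape((1536, 1,))(decoder_input)
motion_input = Input(shape=(1, 6,))
motion_input_reshape = Reshape((6, 1,))(motion_input)

# Concatenate along the Core ML-compliant (sequence) axis, then flatten back.
gru_input = Concatenate(axis=1)([decoder_input_reshape, motion_input_reshape])
gru_input_reshape = Reshape((1, 1542))(gru_input)

# Illustrative head: one recurrent step, then convert with the stock Keras converter.
gru_out = GRU(256)(gru_input_reshape)
model = Model(inputs=[decoder_input, motion_input], outputs=gru_out)
mlmodel = coremltools.converters.keras.convert(model)
```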