tensorflow-onnx
Bug with conversion of tf function with no inputs
Describe the bug
Conversion of a tf.function fails (convert.from_function throws an unexpected exception here) when the graph has no inputs, i.e. when the input signature is an empty list. A model with no inputs is essentially a valid use case, so the suggestion is to replace the existing check with something like:

```python
if input_signature is None:
```
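As a minimal sketch of the difference (the guard function names below are hypothetical, not tf2onnx API), a truthiness check rejects an empty signature along with a missing one, while an `is None` check only rejects the missing-argument case:

```python
# Hypothetical guards mirroring the two variants of the from_function check.
def check_truthy(input_signature):
    # current behavior: `if not input_signature: raise`
    return "rejected" if not input_signature else "accepted"

def check_is_none(input_signature):
    # proposed behavior: only a missing argument is an error
    return "rejected" if input_signature is None else "accepted"

print(check_truthy([]))     # "rejected" -- empty signature (no-input model) refused
print(check_is_none([]))    # "accepted" -- no-input model allowed through
print(check_is_none(None))  # "rejected" -- missing argument still caught
```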
System information
- Ubuntu 20.04.3 LTS
- TensorFlow 2.8.0
- Python 3.8.10 [GCC 9.3.0] on linux
To Reproduce
```python
import tensorflow as tf
import tf2onnx

def no_inputs_graph():
    return tf.constant(1.0, dtype=tf.float32)

tf_func = tf.function(func=no_inputs_graph, input_signature=[])

class ProxyModule(tf.Module):
    def __init__(self, tf_func):
        super().__init__()
        self.apply = tf_func

SAVED_MODEL_PATH = "graph.SavedModel"
proxy = ProxyModule(tf_func)
tf.saved_model.save(proxy, SAVED_MODEL_PATH, signatures={"apply": proxy.apply})
restored = tf.saved_model.load(SAVED_MODEL_PATH)
print("Tensorflow inference result: ", restored.apply())

onnx_proto, _ = tf2onnx.convert.from_function(function=tf_func, input_signature=[], output_path=None, opset=14)
```
Additional context
The example above generates the following output:
```
INFO:tensorflow:Assets written to: graph.SavedModel/assets
Tensorflow inference result:  tf.Tensor(1.0, shape=(), dtype=float32)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [10], in <module>
     21 restored = tf.saved_model.load(SAVED_MODEL_PATH)
     22 print("Tensorflow inference result: ", restored.apply())
---> 24 onnx_proto, _ = tf2onnx.convert.from_function(function=tf_func, input_signature=[], output_path=None, opset=14)

File ~/tf2onnx_patches/venv/lib/python3.8/site-packages/tf2onnx-1.10.0-py3.8.egg/tf2onnx/convert.py:535, in from_function(function, input_signature, opset, custom_ops, custom_op_handlers, custom_rewriter, inputs_as_nchw, extra_opset, shape_override, target, large_model, output_path)
    532     raise NotImplementedError("from_function requires tf-2.0 or newer")
    534 if not input_signature:
--> 535     raise ValueError("from_function requires input_signature")
    537 concrete_func = function.get_concrete_function(*input_signature)
    539 input_names = [input_tensor.name for input_tensor in concrete_func.inputs
    540                if input_tensor.dtype != tf.dtypes.resource]

ValueError: from_function requires input_signature
```
@iolkhovsky,
Could you please help me understand the meaning of a model without any inputs? In such a case, what is the expected meaning of the inference output?
@fatcat-z Hi and thanks for your response.
I understand that at first glance my suggestion may sound a little strange, but let me clarify the idea with the following points:
- I consider tf2onnx a tool intended to convert any abstract TF computational graph into an ONNX representation, as long as the graph is formally valid. A trivial graph without inputs (such as a single constant node or a random node) is essentially valid, so the tool should handle it correctly.
- tf2onnx doesn't need any substantial code changes to handle the scenario I've described. It already handles such graphs; we only need to make the mentioned argument check more explicit.
- The check under discussion doesn't look quite correct anyway. The default value of the argument is None, and given the error message ("from_function requires input_signature"), the check should implement logic like "did the user pass anything?" - that is, test whether the argument is still None. Otherwise, it would make sense to check the type of the passed argument (list/tuple/...), because passing a non-zero number (or anything with a `__bool__` method) could be incorrectly accepted as a valid input_signature. Going back to the previous point, I believe the first option (checking that it is not None) is best.
- I can give an example of a neural network without inputs: any generative model that generates its initial state within the model.
I hope I have explained my point clearly. Please let me know if I can help in any way. Thanks again!
^ In addition, this causes problems when attempting to convert a function whose input signature is an array with more than one element: `onnx_proto, _ = tf2onnx.convert.from_function(control_model_function, input_signature=np.asarray([[1,2]]))` should be valid (where `control_model_function` is a `tf.function`), but because this check tests whether the value is truthy or falsy instead of checking whether it is None, the call raises `ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all().`
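The failure can be reproduced without TensorFlow at all, since it comes from calling `bool()` on a multi-element NumPy array. Below is a minimal sketch (the `guard_*` helper names are hypothetical, mirroring the two check variants discussed above):

```python
import numpy as np

# A multi-element array, like the input_signature in the report above.
sig = np.asarray([[1, 2]])

def guard_truthy(input_signature):
    # mirrors the current truthiness test in from_function
    return not input_signature

def guard_is_none(input_signature):
    # proposed replacement: only reject a missing argument
    return input_signature is None

try:
    guard_truthy(sig)
except ValueError as exc:
    # bool() on a multi-element array raises before any conversion logic runs
    print(exc)

print(guard_is_none(sig))  # False -- a multi-element array passes the None check
```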
@fatcat-z Could you take a look at the discussion again?
The input_signature is designed to be a list of tf.TensorSpec objects specifying the inputs to the model, so a list of tf.TensorSpec is recommended here. But we will still update the code to allow your case.
Agree with you that the check should implement logic like "did the user pass anything?". The current design in tf2onnx is that, if the function to be converted has no inputs, the final optimizers will eliminate all of the nodes and leave only an output. So even if the check is updated, the conversion probably won't help you much, because the final result will contain only one output node.
The PR has addressed this issue; please wait for the next release. You can also install tf2onnx from source to try it out.