tensorflow-onnx
KeyError: 'SimpleMLCreateModelResource'
Ask a Question
Question
I'm trying to convert a TF Random Forest model. Has this been done?
Further information
I'm getting the following error; what does it mean? I'm willing to put work into any missing functionality.
ralf@ark:~/models> /opt/python311/bin/python3 -m tf2onnx.convert --saved-model RF-EXP.arch/ --output RF-EXP.onnx
2023-07-31 18:24:48.336799: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-07-31 18:24:48.379378: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-07-31 18:24:48.379799: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-07-31 18:24:48.958780: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
<frozen runpy>:128: RuntimeWarning: 'tf2onnx.convert' found in sys.modules after import of package 'tf2onnx', but prior to execution of 'tf2onnx.convert'; this may result in unpredictable behaviour
2023-07-31 18:24:49,989 - WARNING - '--tag' not specified for saved_model. Using --tag serve
Traceback (most recent call last):
File "/home/ralf/.local/lib/python3.11/site-packages/tensorflow/python/framework/ops.py", line 4215, in _get_op_def
return self._op_def_cache[type]
~~~~~~~~~~~~~~~~~~^^^^^^
KeyError: 'SimpleMLCreateModelResource'
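For context on where this KeyError comes from: every op a graph references must have a registered OpDef in TensorFlow's op registry, and `_get_op_def` ultimately performs a dict-style lookup that raises `KeyError` for ops TensorFlow was not built with (TF-DF's `SimpleML*` custom ops are only registered when their shared library is loaded, e.g. by importing `tensorflow_decision_forests`). A minimal pure-Python sketch of that lookup behavior; the registry contents below are illustrative, not TensorFlow's real set:

```python
# Toy model of TensorFlow's op-definition lookup: a graph can only be
# loaded/frozen if every op type it references is in the registry.
OP_DEF_REGISTRY = {
    "MatMul": "<OpDef for MatMul>",
    "ReadVariableOp": "<OpDef for ReadVariableOp>",
}

def get_op_def(op_type: str) -> str:
    """Mimics the failing lookup: a plain dict access that raises
    KeyError when the op was never registered (e.g. a custom op whose
    shared library was not loaded into the process)."""
    return OP_DEF_REGISTRY[op_type]

print(get_op_def("MatMul"))           # found: a built-in op
try:
    get_op_def("SimpleMLCreateModelResource")
except KeyError as e:
    print("KeyError:", e)             # the error from the traceback above
```

So the message means the SavedModel references a custom op that the converting process does not know about, not that the model file is corrupt.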
The same error comes up when I go through the beginner colab notebook at https://www.tensorflow.org/decision_forests/tutorials/beginner_colab and try to convert the saved model written by the line model_1.save("/tmp/my_saved_model").
See also
- https://stackoverflow.com/questions/75163777/tensorflowjs-typeerror-unknown-op-simplemlcreatemodelresource
- https://stackoverflow.com/questions/75273668/error-while-convering-tensor-flow-decision-tree-model-into-tflite-model
- https://blog.ml6.eu/serving-decision-forests-with-tensorflow-b447ea4fc81c
Hi, TF-DF author here: TensorFlow Decision Forests uses TensorFlow "custom ops", which are incompatible with some parts of the TF ecosystem. One of them is SimpleMLCreateModelResource, which explains the error you're seeing. We're working on integrating the op in other parts of the ecosystem, but some things are still missing: for instance, TF-DF is now compatible with TensorFlow Serving and TensorFlow.js, but not yet with TF Lite or ONNX.
I'm not familiar with conversion to ONNX, but our team is interested in making this work. If anyone could give us a sense of how much work it would be to convert a Decision Forest model to ONNX, I'd be very happy. Since TF-DF is a wrapper around Yggdrasil Decision Forests (YDF), a pure C++ decision forests library, I see two high-level directions: either we convert the YDF model directly, or we convert the TF-DF custom TensorFlow ops. If anyone has guidance on how to attack this issue, or could help with the implementation (@rwst ?), that would be amazing, since our team's bandwidth is quite limited.
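To give a sense of what "convert the YDF model directly" would target: ONNX represents forests with the `ai.onnx.ml` TreeEnsemble* operators, which encode each tree as flat parallel arrays (per-node feature id, threshold, and true/false child ids). A hedged pure-Python sketch of evaluating one tree in that flattened form; the attribute names mirror ONNX's TreeEnsembleRegressor, but the tiny tree itself is invented for illustration:

```python
# How ai.onnx.ml TreeEnsemble* operators flatten a decision tree into
# parallel arrays. A real converter would emit these arrays from each
# YDF tree; this toy tree is made up.
#
# Tree:        x[0] <= 2.0 ?
#             /             \
#        leaf: 10.0     x[1] <= 5.0 ?
#                       /          \
#                  leaf: 20.0   leaf: 30.0
nodes_modes        = ["BRANCH_LEQ", "LEAF", "BRANCH_LEQ", "LEAF", "LEAF"]
nodes_featureids   = [0, 0, 1, 0, 0]            # feature tested at each branch
nodes_values       = [2.0, 0.0, 5.0, 0.0, 0.0]  # threshold at each branch
nodes_truenodeids  = [1, 0, 3, 0, 0]            # child if condition holds
nodes_falsenodeids = [2, 0, 4, 0, 0]            # child otherwise
leaf_values        = {1: 10.0, 3: 20.0, 4: 30.0}  # node id -> output

def predict(x):
    """Walk the flattened tree the way an ONNX runtime would."""
    node = 0
    while nodes_modes[node] != "LEAF":
        if x[nodes_featureids[node]] <= nodes_values[node]:
            node = nodes_truenodeids[node]
        else:
            node = nodes_falsenodeids[node]
    return leaf_values[node]

print(predict([1.0, 9.0]))  # 10.0
print(predict([3.0, 4.0]))  # 20.0
print(predict([3.0, 9.0]))  # 30.0
```

The YDF-direct route would emit such arrays per tree straight from the C++ model, bypassing the TensorFlow graph and its custom ops entirely; the custom-op route would instead need tf2onnx handlers for each SimpleML* op.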
I don't have any information on this, I'm sorry. My angle: I need a way to predict labels in a Kotlin desktop app, using a model I already trained with TF-DF. The idea was to convert the model to ONNX and use kinference. In the meantime, though, I find the easiest option would be to call the YDF CLI from Kotlin (as there is no direct Kotlin/C++ interoperability yet).
Looks like TensorFlow can't resolve this op when tf2onnx tries to freeze the TF graph for conversion. At the moment, tf2onnx can't help with this.