tf.random.uniform is optimized out when it shouldn't be
🐞Describing the bug
When converting a test model that wraps tf.random.uniform in a tf.function, the tf.random.uniform op appears to be optimized out of the network during conversion. It shouldn't be: each call to predict should return a fresh random sample.
Stack Trace
This stack trace appears when exiting the program. It may be related.
Exception ignored in: <function AtomicFunction.__del__ at 0x15752b5b0>
Traceback (most recent call last):
File "python3.10/site-packages/tensorflow/python/eager/polymorphic_function/atomic_function.py", line 291, in __del__
TypeError: 'NoneType' object is not subscriptable
To Reproduce
import coremltools as ct
import numpy as np
import tensorflow as tf

# A concrete function that ignores its input and returns a fresh random tensor.
dummy = [
    tf.function(
        lambda x: tf.random.uniform((1, 80)),
        input_signature=[tf.TensorSpec(shape=[1, 2, 1290, 513], dtype=tf.float32)],
    ).get_concrete_function()
]

model = ct.convert(
    dummy,
    convert_to="neuralnetwork",
)

# Both calls print identical values, even though tf.random.uniform
# should produce a new sample on every invocation.
print(model.predict({"x": np.ones((1, 2, 1290, 513), dtype=np.float32)}))
print(model.predict({"x": np.ones((1, 2, 1290, 513), dtype=np.float32)}))
The Graphviz output from debugging shows that the random op is not preserved, and stdout shows identical outputs for both predict calls.
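For comparison (this check is not part of the original report), calling the same concrete function directly in TensorFlow returns a different sample on every call, which is the behavior the converted model should preserve:

import tensorflow as tf

fn = tf.function(
    lambda x: tf.random.uniform((1, 80)),
    input_signature=[tf.TensorSpec(shape=[1, 2, 1290, 513], dtype=tf.float32)],
).get_concrete_function()

x = tf.ones((1, 2, 1290, 513), dtype=tf.float32)
print(fn(x)[0, :5])  # first call
print(fn(x)[0, :5])  # second call prints different values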
System environment (please complete the following information):
- coremltools version: Both 7.1 and 7.2
- OS (e.g. MacOS version or Linux type): macOS
- Any other relevant version information (e.g. PyTorch or TensorFlow version): I tried this with TF 2.15.0 and 2.16.1
Small update: if I remove delete_unnecessary_constant_nodes from tfssa_passes, I start seeing different outputs on successive predict calls again. This suggests the constant-folding passes treat the random op as a constant (it has no non-constant inputs) and bake the sample drawn at conversion time into the model.
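For illustration only (this sketch is not from the thread): the same experiment could be approximated by filtering the pass list before calling ct.convert. The module path below and the assumption that tfssa_passes is a module-level attribute are guesses about coremltools internals and may not match your version.

# Illustrative monkey-patch reproducing the commenter's experiment.
# Assumption: the TF2 frontend's pass list is exposed as tfssa_passes on its
# load module; these are internal, unstable names, so we guard with hasattr.
from coremltools.converters.mil.frontend.tensorflow2 import load as tf2_load

if hasattr(tf2_load, "tfssa_passes"):
    tf2_load.tfssa_passes = [
        p for p in tf2_load.tfssa_passes
        if getattr(p, "__name__", "") != "delete_unnecessary_constant_nodes"
    ]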
This is indeed a bug. Looking at model.get_spec(), the return values are coming from an identity layer. This is also an issue for convert_to="mlprogram".
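A quick way to see this (a minimal check using the public get_spec() API; not code from the thread) is to print the layer types in the converted neuralnetwork spec from the repro above; only constant/identity-style layers remain, with no random op:

# Each NeuralNetworkLayer has a oneof field named "layer" identifying its type,
# e.g. "loadConstant" or "activation"; no random layer appears in the list.
spec = model.get_spec()
for layer in spec.neuralNetwork.layers:
    print(layer.name, layer.WhichOneof("layer"))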
I've hit this issue too with tf.random.categorical. The outputs of the exported mlprogram are deterministic, and the mlmodel file doesn't seem to have any random ops.