Migrating Evaluating and exporting scikit-learn metrics in a Keras callback example to Keras 3
This PR migrates the "Evaluating and exporting scikit-learn metrics in a Keras callback" example to Keras 3.0 [TF-only backend].
For reference, here is the notebook link: https://colab.research.google.com/drive/1rG2dhTVwJI4_FV3zgNtaxeq2lF7TN-Y1?usp=sharing
cc: @divyashreepathihalli @fchollet
The following is the Git diff for the changed files:
Changes:
diff --git a/examples/keras_recipes/sklearn_metric_callbacks.py b/examples/keras_recipes/sklearn_metric_callbacks.py
index d40cda34..9839b35d 100644
--- a/examples/keras_recipes/sklearn_metric_callbacks.py
+++ b/examples/keras_recipes/sklearn_metric_callbacks.py
@@ -33,9 +33,10 @@ import os
os.environ["KERAS_BACKEND"] = "tensorflow"
-import tensorflow as tf
-import keras as keras
+import keras
+from keras import ops
from keras import layers
+import tensorflow as tf
from sklearn.metrics import jaccard_score
import numpy as np
import os
@@ -56,7 +57,7 @@ class JaccardScoreCallback(keras.callbacks.Callback):
self.keras_metric.reset_state()
predictions = self.model.predict(self.x_test)
jaccard_value = jaccard_score(
- np.argmax(predictions, axis=-1), self.y_test, average=None
+ ops.argmax(predictions, axis=-1), self.y_test, average=None
)
self.keras_metric.update_state(jaccard_value)
self._write_metric(
@@ -89,8 +90,8 @@ input_shape = (28, 28, 1)
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# Make sure images have shape (28, 28, 1)
-x_train = np.expand_dims(x_train, -1)
-x_test = np.expand_dims(x_test, -1)
+x_train = ops.expand_dims(x_train, -1)
+x_test = ops.expand_dims(x_test, -1)
print("x_train shape:", x_train.shape)
print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")
@@ -120,7 +121,7 @@ epochs = 15
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
callbacks = [
- JaccardScoreCallback(model.name, x_test, np.argmax(y_test, axis=-1), "logs")
+ JaccardScoreCallback(model.name, x_test, ops.argmax(y_test, axis=-1), "logs")
]
model.fit(
x_train,
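For context on the ops substitutions above: keras.ops functions accept NumPy arrays but return backend-native tensors, not NumPy arrays. A minimal sketch of this behavior, assuming the TensorFlow backend set in the example:

import os
os.environ["KERAS_BACKEND"] = "tensorflow"

import numpy as np
from keras import ops

x = np.zeros((4, 28, 28), dtype="float32")
x = ops.expand_dims(x, -1)  # returns a backend tensor (a tf.Tensor here), not a NumPy array
print(x.shape)  # (4, 28, 28, 1)

This is relevant to the backend-specific errors reported later in this thread: downstream consumers such as scikit-learn expect NumPy arrays.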
The TensorFlow backend worked well. For the other backends, validation_split was a problem, so I tried validation_data instead but still got an attribute error.
If the changes in the .py file are approved, I will update the rest of the files accordingly, with updated dates.
For the other backends, validation_split was a problem, so I tried validation_data instead but still got an attribute error.
What was the error in either case? Can you share a Colab to reproduce?
For JAX: https://colab.research.google.com/drive/1MziZSUWEuseaQ6DL_6ykaY9jDk4I2WpB?usp=sharing
For PyTorch: https://colab.research.google.com/drive/1FBDxozi6YzzlUNjgz6seKo-xQxbs06JB?usp=sharing
Both Colabs show the problem with validation_split.
Thanks for the report. It looks like we only support np arrays and tf tensors for validation_split. This can be fixed easily, actually. Let me just do that in Keras.
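Until that fix lands, one interim workaround (a sketch, not taken from the PR itself, and assuming the example's model, x_train, and y_train are in scope; the split fraction is illustrative) is to hand fit() NumPy arrays so that validation_split sees a supported type on any backend:

from keras import ops

# Convert backend tensors back to NumPy before calling fit().
x_train_np = ops.convert_to_numpy(x_train)
y_train_np = ops.convert_to_numpy(y_train)

# NumPy inputs are supported by validation_split regardless of the active backend.
model.fit(x_train_np, y_train_np, batch_size=128, epochs=15, validation_split=0.1)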
Done. You can retry after building Keras from sources.
I tried, and the validation_split problem is solved, but errors are still there. For PyTorch: TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first. For JAX: AttributeError: 'ArrayImpl' object has no attribute 'numpy'
You can check the notebooks for errors.
Just replace .numpy() with ops.convert_to_numpy, which should work with any tensor.
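A minimal sketch of that substitution inside the callback (the .result() call is illustrative; the point is the conversion):

from keras import ops

# Backend-specific: fails on JAX, where arrays have no .numpy() method
# value = self.keras_metric.result().numpy()

# Backend-agnostic: ops.convert_to_numpy should work with any backend tensor
value = ops.convert_to_numpy(self.keras_metric.result())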
Thanks, that helped with the JAX backend. But PyTorch is still giving the error: TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
I looked over the code again and think the JaccardScoreCallback is the cause. Although I attempted a fix, I kept receiving the same error. Only the torch backend is causing issues. So, what should I do with this PR? Should I note that it currently supports only the JAX and TensorFlow backends?
cc: @fchollet
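One possible culprit, offered as an assumption rather than a confirmed diagnosis: in the diff above, the callback passes the output of ops.argmax straight to jaccard_score, and on the torch backend that output is a CUDA tensor, which scikit-learn's internal np.asarray cannot convert. A hedged sketch of a backend-agnostic version of that step:

from keras import ops
from sklearn.metrics import jaccard_score

predictions = self.model.predict(self.x_test)
# Convert the backend tensor to NumPy before scikit-learn sees it.
pred_labels = ops.convert_to_numpy(ops.argmax(predictions, axis=-1))
jaccard_value = jaccard_score(pred_labels, self.y_test, average=None)

self.y_test may need the same conversion if it was produced by ops.argmax, as in the callback construction shown in the diff.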
What's the complete stack trace?