mediapipe
Keras 2 on macOS
Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
No
OS Platform and Distribution
macOS 15 (ARM)
Python Version
3.12
MediaPipe Model Maker version
0.1.0.1
Task name (e.g. Image classification, Gesture recognition etc.)
Gesture recognition
Describe the actual behavior
Calling
data = gesture_recognizer.Dataset.from_folder(
    dirname=dataset_path,
    hparams=gesture_recognizer.HandDataPreprocessingParams(),
)
raises a ValueError: File format not supported: filepath=mediapipe_model_maker/models/gesture_recognizer/gesture_embedder. Keras 3 only supports V3 `.keras` files and legacy H5 format files (`.h5` extension).
Describe the expected behavior
The dataset in the given folder should be loaded into a gesture_recognizer.Dataset.
Standalone code/steps you may have used to try to get what you need
dataset_path = "rps_data_sample"
data = gesture_recognizer.Dataset.from_folder(
    dirname=dataset_path,
    hparams=gesture_recognizer.HandDataPreprocessingParams(),
)
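A workaround that may apply here, assuming the root cause is TensorFlow 2.16+ shipping Keras 3 as `tf.keras` by default: install the `tf-keras` compatibility package and set `TF_USE_LEGACY_KERAS=1` before anything imports TensorFlow, so `tf.keras` resolves to Keras 2 and can still read the legacy SavedModel that Model Maker bundles.

```python
import os

# Must run before TensorFlow (and therefore mediapipe_model_maker) is
# imported; requires `pip install tf-keras` in the same environment.
os.environ["TF_USE_LEGACY_KERAS"] = "1"

# Import only after the flag is set:
# from mediapipe_model_maker import gesture_recognizer
```

Pinning `tensorflow<2.16` in the virtualenv is an alternative if installing `tf-keras` is not an option.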
Other info / Complete Logs
...
INFO:tensorflow:Loading image ./rps_data_sample/none/744.jpg
INFO:tensorflow:Loading RGB image ./rps_data_sample/none/744.jpg
INFO:tensorflow:Loading image ./rps_data_sample/paper/855.jpg
INFO:tensorflow:Loading RGB image ./rps_data_sample/paper/855.jpg
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[9], line 1
----> 1 data = gesture_recognizer.Dataset.from_folder(
2 dirname=dataset_path,
3 hparams=gesture_recognizer.HandDataPreprocessingParams()
4 )
6 # train_data, rest_data = data.split(0.8)
7 # validation_data, test_data = rest_data.split(0.5)
File ~/venv/lib/python3.12/site-packages/mediapipe_model_maker/python/vision/gesture_recognizer/dataset.py:211, in Dataset.from_folder(cls, dirname, hparams)
206 hand_data_dict = {
207 k: [lm[k] for lm in valid_hand_data] for k in valid_hand_data[0]
208 }
209 hand_ds = tf.data.Dataset.from_tensor_slices(hand_data_dict)
--> 211 embedder_model = model_util.load_keras_model(
212 constants.GESTURE_EMBEDDER_KERAS_MODEL_PATH)
214 hand_ds = hand_ds.batch(batch_size=1)
215 hand_embedding_ds = hand_ds.map(
216 map_func=lambda feature: embedder_model(dict(feature)),
217 num_parallel_calls=tf.data.experimental.AUTOTUNE)
File ~/venv/lib/python3.12/site-packages/mediapipe_model_maker/python/core/utils/model_util.py:65, in load_keras_model(model_path, compile_on_load)
51 """Loads a tensorflow Keras model from file and returns the Keras model.
52
53 Args:
(...) 62 A tensorflow Keras model.
63 """
64 absolute_path = file_util.get_absolute_path(model_path)
---> 65 return tf.keras.models.load_model(
66 absolute_path, custom_objects={'tf': tf}, compile=compile_on_load)
File ~/venv/lib/python3.12/site-packages/keras/src/saving/saving_api.py:206, in load_model(filepath, custom_objects, compile, safe_mode)
200 raise ValueError(
201 f"File not found: filepath={filepath}. "
202 "Please ensure the file is an accessible `.keras` "
203 "zip file."
204 )
205 else:
--> 206 raise ValueError(
207 f"File format not supported: filepath={filepath}. "
208 "Keras 3 only supports V3 `.keras` files and "
209 "legacy H5 format files (`.h5` extension). "
210 "Note that the legacy SavedModel format is not "
211 "supported by `load_model()` in Keras 3. In "
212 "order to reload a TensorFlow SavedModel as an "
213 "inference-only layer in Keras 3, use "
214 "`keras.layers.TFSMLayer("
215 f"{filepath}, call_endpoint='serving_default')` "
216 "(note that your `call_endpoint` "
217 "might have a different name)."
218 )
ValueError: File format not supported: filepath=~/venv/lib/python3.12/site-packages/mediapipe_model_maker/models/gesture_recognizer/gesture_embedder. Keras 3 only supports V3 `.keras` files and legacy H5 format files (`.h5` extension). Note that the legacy SavedModel format is not supported by `load_model()` in Keras 3. In order to reload a TensorFlow SavedModel as an inference-only layer in Keras 3, use `keras.layers.TFSMLayer(~/venv/lib/python3.12/site-packages/mediapipe_model_maker/models/gesture_recognizer/gesture_embedder, call_endpoint='serving_default')` (note that your `call_endpoint` might have a different name).
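For completeness, the Keras 3 escape hatch that the error message itself proposes would look roughly like the sketch below. The directory is the embedder path reported in the traceback, and `serving_default` is the error message's own guess at the endpoint name, not something verified against this model.

```python
import importlib.util

# SavedModel directory named in the traceback (relative form).
EMBEDDER_DIR = "mediapipe_model_maker/models/gesture_recognizer/gesture_embedder"


def load_embedder_as_layer(model_dir: str):
    """Wrap a legacy SavedModel as an inference-only Keras 3 layer."""
    if importlib.util.find_spec("keras") is None:
        raise RuntimeError("keras is not installed in this environment")
    import keras

    # A TFSMLayer exposes only the serving signature; it cannot be retrained.
    return keras.layers.TFSMLayer(model_dir, call_endpoint="serving_default")
```

Note this is not a drop-in fix for Dataset.from_folder: the loading happens inside model_util.load_keras_model, so mediapipe_model_maker itself would have to be adapted to accept a layer instead of a full model.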