keras-nlp
Documentation Request: How to predict with "Separate preprocessing from the same preset"
**Is your feature request related to a problem? Please describe.**
Documentation request. Please update: https://github.com/keras-team/keras-io/blob/master/guides/keras_nlp/getting_started.py

**Describe the solution you'd like**
I would like to call `predict()` on a model whose preprocessing was set up separately.
In the example given, I execute the following code:
```python
preprocessor = keras_nlp.models.BertPreprocessor.from_preset(
    "bert_tiny_en_uncased",
    sequence_length=512,
)

# Apply the preprocessor to every sample of train and test data using `map()`.
# `tf.data.AUTOTUNE` and `prefetch()` are options to tune performance, see
# https://www.tensorflow.org/guide/data_performance for details.
imdb_train_cached = (
    imdb_train.map(preprocessor, tf.data.AUTOTUNE).cache().prefetch(tf.data.AUTOTUNE)
)
imdb_test_cached = (
    imdb_test.map(preprocessor, tf.data.AUTOTUNE).cache().prefetch(tf.data.AUTOTUNE)
)

classifier = keras_nlp.models.BertClassifier.from_preset(
    "bert_tiny_en_uncased", preprocessor=None, num_classes=2
)
classifier.fit(
    imdb_train_cached,
    validation_data=imdb_test_cached,
    epochs=3,
)
```
I get a trained model:
```
Downloading data from https://storage.googleapis.com/keras-nlp/models/bert_tiny_en_uncased/v1/vocab.txt
231508/231508 [==============================] - 0s 2us/step
Downloading data from https://storage.googleapis.com/keras-nlp/models/bert_tiny_en_uncased/v1/model.h5
17602216/17602216 [==============================] - 2s 0us/step
Epoch 1/3
1563/1563 [==============================] - 330s 198ms/step - loss: 0.4165 - sparse_categorical_accuracy: 0.8064 - val_loss: 0.3525 - val_sparse_categorical_accuracy: 0.8452
Epoch 2/3
1563/1563 [==============================] - 274s 175ms/step - loss: 0.2653 - sparse_categorical_accuracy: 0.8927 - val_loss: 0.3167 - val_sparse_categorical_accuracy: 0.8683
Epoch 3/3
1563/1563 [==============================] - 278s 178ms/step - loss: 0.1976 - sparse_categorical_accuracy: 0.9257 - val_loss: 0.3445 - val_sparse_categorical_accuracy: 0.8663
<keras.src.callbacks.History at 0x78a5f5b4c130>
```
I would like to predict with my new model:
```python
classifier.predict(["I love modular workflows in keras-nlp!"])
```
and I get an error: `Layer "bert_classifier" expects 3 input(s), but it received 1 input tensors.`
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-6-3ebc2f7f2482> in <cell line: 1>()
----> 1 classifier.predict(["I love modular workflows in keras-nlp!"])

3 frames
/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py in tf__run_step(data)
      9   do_return = False
     10   retval_ = ag__.UndefinedReturnValue()
---> 11   outputs = ag__.converted_call(ag__.ld(model).predict_step, (ag__.ld(data),), None, fscope)
     12   with ag__.ld(tf).control_dependencies(ag__.ld(_minimum_control_deps)(ag__.ld(outputs))):
     13   ag__.converted_call(ag__.ld(model)._predict_counter.assign_add, (1,), None, fscope)

ValueError: in user code:

    File "/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py", line 2341, in predict_function  *
        return step_function(self, iterator)
    File "/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py", line 2315, in run_step  *
        outputs = model.predict_step(data)
    File "/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py", line 2283, in predict_step  **
        return self(x, training=False)
    File "/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
        raise e.with_traceback(filtered_tb) from None
    File "/usr/local/lib/python3.10/dist-packages/keras/src/engine/input_spec.py", line 219, in assert_input_compatibility
        raise ValueError(

    ValueError: Layer "bert_classifier" expects 3 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'data:0' shape=(None,) dtype=string>]
```
I would appreciate a short code block or documentation section (could be a "prologue") demonstrating how to call this model.
Thanks!