Fix three errors in the tutorial 'Image captioning with visual attention':

- To match the shape of the image features produced by the MobileNet feature extractor, use `units=576`:

  ```python
  # Before
  model = Captioner(tokenizer, feature_extractor=mobilenet, output_layer=output_layer,
                    units=256, dropout_rate=0.5, num_layers=2, num_heads=2)
  # After
  model = Captioner(tokenizer, feature_extractor=mobilenet, output_layer=output_layer,
                    units=576, dropout_rate=0.5, num_layers=2, num_heads=2)
  ```

- Use the `set_model` method, because property `model` of the `GenerateText` object has no setter:

  ```python
  g.model = model    # Before: raises AttributeError
  g.set_model(model) # After
  ```

- `labels` must have dtype `int32` or `int64`, so cast before computing the loss:

  ```python
  # Before
  loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels, preds)
  # After
  labels = tf.cast(labels, tf.int64)
  loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels, preds)
  ```
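To illustrate why the third fix is needed without depending on TensorFlow, here is a minimal NumPy sketch of sparse softmax cross-entropy (the function name and error message are illustrative, not TensorFlow's): integer labels are used to index into the logits, which is why float labels are rejected and an explicit cast is required.

```python
import numpy as np

def sparse_softmax_xent(labels, logits):
    """Sketch of sparse softmax cross-entropy. Integer labels select one
    logit column per row, so non-integer labels cannot be accepted."""
    labels = np.asarray(labels)
    if not np.issubdtype(labels.dtype, np.integer):
        raise TypeError("labels must have an integer dtype (int32/int64)")
    # Numerically stabilized log-softmax.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # Pick the log-probability of the true class for each example.
    return -log_probs[np.arange(len(labels)), labels]

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3]])

try:
    sparse_softmax_xent(np.array([0.0, 1.0]), logits)  # float labels fail
except TypeError:
    pass

# Casting to an integer dtype, as in the fix above, resolves the error.
loss = sparse_softmax_xent(np.array([0.0, 1.0]).astype(np.int64), logits)
```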