Leonardo Apolonio

Results: 21 comments of Leonardo Apolonio

@ronalddas Yeah, I think you can take it out. I believe the order of predictions in a batch is stable. You can test locally and verify.

@tomaryancit Can you create a pull request? It's hard to figure out what's going on in the code you posted. General suggestions: how is the performance? During training is it...

That's what the model input function specifies, but depending on the mode it might not be used.

@AndrewMcDowell It looks like your features_to_input and receiver_tensors don't match. Check out https://stackoverflow.com/questions/53410469/tensorflow-estimator-servinginputreceiver-features-vs-receiver-tensors-when-and Your concrete next step is to verify that parse_example returns a SparseTensor.

@pcnfernando You don't need to use parse_example; refer to build_raw_serving_input_receiver_fn above.

@pcnfernando You have to tokenize the string first, which turns the words into numbers. Then those numbers are fed into the model.
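To make the tokenize-then-feed idea concrete, here is a minimal sketch. The vocabulary and the `tokenize` helper are hypothetical placeholders for illustration, not the actual preprocessing the model uses (real pipelines typically use a learned vocabulary and subword tokenization):

```python
# Hypothetical toy vocabulary mapping words to integer ids.
# Id 0 is reserved for out-of-vocabulary words.
vocab = {"<unk>": 0, "the": 1, "movie": 2, "was": 3, "great": 4}

def tokenize(text):
    """Lowercase, split on whitespace, and map each word to its vocab id."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The movie was great"))  # -> [1, 2, 3, 4]
```

The resulting list of integers is what actually gets fed into the model, usually after padding or truncating to a fixed sequence length.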

Just use: https://blog.tensorflow.org/2020/03/part-1-fast-scalable-and-accurate-nlp-tensorflow-deploying-bert.html with the corresponding notebook: https://colab.research.google.com/github/tensorflow/workshops/blob/master/blog/TFX_Pipeline_for_Bert_Preprocessing.ipynb

https://codepen.io/lapolonio/pen/wmNgKK?editors=1111

I forked the repo: https://github.com/lapolonio/generating-reviews-discovering-sentiment. I used HTML for the heatmap.

Look at generate_html.py.
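The general idea of an HTML heatmap like this can be sketched as follows. This is a hypothetical illustration in the spirit of generate_html.py, not its actual code; the function name, signature, and color scale are all assumptions:

```python
def heatmap_html(words, scores):
    """Wrap each word in a <span> whose background color encodes its
    sentiment score in [-1, 1]: green for positive, red for negative,
    with opacity proportional to the score's magnitude."""
    spans = []
    for word, score in zip(words, scores):
        if score >= 0:
            color = f"rgba(0, 200, 0, {score:.2f})"
        else:
            color = f"rgba(200, 0, 0, {-score:.2f})"
        spans.append(f'<span style="background-color: {color}">{word}</span>')
    return " ".join(spans)

html = heatmap_html(["great", "terrible"], [0.9, -0.8])
print(html)
```

Rendering the returned string in a browser shows each word shaded by its sentiment, which is the same visual effect as the heatmap in the forked repo.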