DialogEntailment

How to use BERT to directly evaluate my conversation?

Open dxlong2000 opened this issue 2 years ago • 8 comments

Hi @ehsk , @korymath ,

Thanks for your great work. May I ask if there are any inference scripts I can run to evaluate my generated dialog? Looking forward to hearing from you soon.

Thanks!

dxlong2000 avatar Aug 02 '22 08:08 dxlong2000

Hi @dxlong2000,

Unfortunately, we don't have our fine-tuned models anymore. You need to fine-tune BERT yourself first.

Hope this helps!

ehsk avatar Aug 02 '22 16:08 ehsk

Thanks for your reply, I see. Would you mind uploading the inference code, i.e., how to load a model and evaluate a new dialog? Thanks

dxlong2000 avatar Aug 03 '22 03:08 dxlong2000

Our code supports evaluation; you can find it here. We didn't implement inference, where the predicted labels for input data are saved, but it would be quite similar to the evaluation code.
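To make the missing piece concrete: the step inference would add on top of the evaluation loop is mapping the classifier's logits to saved labels. A minimal numpy sketch; the label names and the logit values are hypothetical stand-ins, not real output of the DialogEntailment model:

```python
import numpy as np

# Hypothetical NLI-style label set -- an assumption, not taken from the repo.
LABELS = ["entailment", "neutral", "contradiction"]

def logits_to_labels(logits):
    """Map an (n_examples, n_labels) logit array to label strings via argmax."""
    return [LABELS[i] for i in np.argmax(logits, axis=1)]

# Made-up logits for three examples, standing in for model output.
logits = np.array([
    [2.1, 0.3, -1.0],
    [-0.5, 1.7, 0.2],
    [0.0, -2.0, 3.3],
])
predicted = logits_to_labels(logits)
print(predicted)  # one label per input example
```

From there, writing `predicted` to a file alongside the inputs would give the inference script the evaluation code lacks.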

ehsk avatar Aug 03 '22 16:08 ehsk

Hi @ehsk ,

Your evaluation code only reports eval_accuracy, eval_loss, global_step, and loss. May I ask how I can get the SS (Semantic Similarity) scores? Looking forward to hearing from you soon.

Thanks!

dxlong2000 avatar Aug 10 '22 15:08 dxlong2000

Hi @dxlong2000,

For Semantic Similarity, take a look here. You need to write code like the following:

from dialogentail.semantic_similarity import SemanticSimilarity

ss = SemanticSimilarity()
ss.compute(conversation_history, actual_response, generated_response)

conversation_history is a list of strings, and actual_response and generated_response are both strings.

ehsk avatar Aug 10 '22 23:08 ehsk

Hi @ehsk ,

Thanks for your quick response. My understanding is that the entailment model is trained on the ground-truth responses, and we can then take that fine-tuned model to evaluate a new conversation without knowing the actual_response. Am I correct?

I still see that the computation of SS includes actual_response. In the paper I saw: "It measures the distance between the generated response and the utterances in the conversation history", but there is no mention of the actual_response. Would you mind clarifying this for me?

Thanks a lot!

dxlong2000 avatar Aug 11 '22 01:08 dxlong2000

I saw you already provided sim_generated_resp, which answers my question above. Is there any way I can load my pretrained BERT from above instead of ELMo?

dxlong2000 avatar Aug 11 '22 02:08 dxlong2000

Semantic Similarity measures the cosine similarity between embedding vectors; an updated version of it would be BERTScore. actual_response is not really necessary: you can pass the same string as generated_response.
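For intuition, here is a toy numpy sketch of what a cosine-similarity metric over embeddings computes. The 4-d vectors are made up for illustration, standing in for real sentence embeddings such as ELMo vectors:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two 1-d vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-d "embeddings": two history utterances and one generated response.
history_vecs = [
    np.array([1.0, 0.0, 0.5, 0.2]),
    np.array([0.8, 0.1, 0.4, 0.3]),
]
generated_vec = np.array([0.9, 0.05, 0.45, 0.25])

# Similarity of the generated response to each utterance in the history;
# a metric can then aggregate these, e.g. by max or mean.
sims = [cosine_sim(h, generated_vec) for h in history_vecs]
print(max(sims), sum(sims) / len(sims))
```

How the per-utterance scores are aggregated (max, mean, or only the last utterance) is a design choice of the metric, not fixed by cosine similarity itself.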

If you want to use an entailment model, the coherence metric, here, is what you need:

from dialogentail.coherence import BertCoherence

c = BertCoherence("/path/to/model")
c.compute(conversation_history, actual_response, generated_response)

The constructor argument is the path to a fine-tuned BERT model.
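For intuition, an entailment-based coherence score can be read as the probability the fine-tuned model assigns to the entailment class for a (conversation history, generated response) pair. A toy sketch with hypothetical logits; the values and the label ordering are assumptions for illustration, not real BertCoherence output:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-d array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical logits from an NLI head for one (history, response) pair,
# assuming the order [entailment, neutral, contradiction].
logits = np.array([1.9, 0.2, -1.1])
probs = softmax(logits)

# A coherence-style score in [0, 1]: probability of entailment.
entailment_prob = probs[0]
print(entailment_prob)
```

A higher entailment probability indicates a response the model judges as more coherent with the conversation history.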

ehsk avatar Aug 11 '22 14:08 ehsk