
Evaluation code for various unsupervised automated metrics for Natural Language Generation.

35 nlg-eval issues, sorted by most recently updated

Hello, I have a problem with the way that "object oriented API for repeated calls in a script - multiple examples" works. To my understanding, I have to have a...
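For context, a minimal sketch of how the object-oriented API mentioned in that issue is typically used, based on the project README; the example strings and variable names here are placeholders:

```python
from nlgeval import NLGEval

# Instantiate once; this loads the (fairly large) models a single time
# so that repeated calls later in the script stay cheap.
nlgeval = NLGEval()

# Score one hypothesis against one or more reference sentences.
references = ['this is a reference sentence', 'this is another reference']
hypothesis = 'this is a generated sentence'
metrics_dict = nlgeval.compute_individual_metrics(references, hypothesis)
print(metrics_dict)  # e.g. keys like 'Bleu_1', 'METEOR', 'CIDEr', ...
```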

PR for a feature addition related to issue [129](https://github.com/Maluuba/nlg-eval/issues/129). **Motivation:** In summarization, generation, and style-transfer tasks, it is also useful to check the fluency/coherence of the generated outputs (references...

Hello, why do I only get BLEU in the output when I use it on a Mac?

I've added the SPICE metric from coco-caption and also updated the setup. SPICE works in Python 3. Needs a double check.

Right after successfully installing and setting up nlg-eval, I did the following test: ``` from nlgeval import NLGEval nlgeval = NLGEval() # loads the models metrics_dict = nlgeval.compute_individual_metrics(['this is...

When instantiating the model with nlgeval = NLGEval(), I get the error below. I'd appreciate any help with this issue. ``` Traceback (most recent call last): File "eval.py", line 29, in nlgeval...
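A common workaround for instantiation failures, sketched below, is to disable the heavier embedding-based metrics; the constructor flags shown here are taken from the README as I understand it, so treat them as an assumption:

```python
from nlgeval import NLGEval

# Skip the skip-thoughts and GloVe based metrics, which depend on the
# largest downloaded models and are a frequent source of setup errors.
nlgeval = NLGEval(no_skipthoughts=True, no_glove=True)
```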

WARNING: Discarding git+https://github.com/Maluuba/nlg-eval.git@master. Command errored out with exit status 128: git clone -q https://github.com/Maluuba/nlg-eval.git /tmp/pip-req-build-m76jpr_q Check the logs for full command output. ERROR: Command errored out with exit status 128:...

In line 37 below, shouldn't it be "_this is one reference sentence for sentence2_"? Instead, it says this is a reference sentence for **_sentence2_** which was _**generated by your model**_...

Hey, I wanted to test the code on Google Colab, but the code throws an **error**. Here's the code I'm trying to run: ```Python from nlgeval import compute_metrics metrics_dict =...
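For reference, the functional API that snippet appears to be using scores files of hypotheses and references, roughly as in the README example below; the file paths are placeholders:

```python
from nlgeval import compute_metrics

# hyp.txt: one generated sentence per line.
# ref1.txt, ref2.txt: parallel reference files, one sentence per line.
metrics_dict = compute_metrics(
    hypothesis='examples/hyp.txt',
    references=['examples/ref1.txt', 'examples/ref2.txt'])
print(metrics_dict)
```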

METEOR doesn't include Chinese word data, and the METEOR homepage doesn't provide it either. Is it appropriate to evaluate a Chinese NLG task with METEOR?