
Resources for the "SummEval: Re-evaluating Summarization Evaluation" paper

30 SummEval issues, sorted by recently updated

Since the BERTScore paper reports different evaluation results with and without idf-weighted averaging, I'm wondering which variant of BERTScore is used here (or, sadly, whether it's both).
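For context, the idf weighting described in the BERTScore paper downweights tokens that appear in many references. A minimal sketch of that idf computation (simplified; the actual bert-score implementation applies plus-one smoothing and uses the model's own tokenizer, so the numbers here are only illustrative):

```python
import math
from collections import Counter

def idf_weights(references):
    """Compute idf weights over whitespace-tokenized references,
    following the BERTScore paper's formula idf(w) = -log(df(w) / N),
    where df(w) is the number of references containing token w."""
    n = len(references)
    df = Counter()
    for ref in references:
        df.update(set(ref.split()))  # count each token once per reference
    return {w: -math.log(c / n) for w, c in df.items()}

weights = idf_weights(["the cat sat", "the dog ran"])
# "the" occurs in every reference, so it gets weight 0;
# rarer tokens like "cat" get strictly positive weight.
```

With idf weighting enabled, function words contribute little to the final precision/recall, which is exactly why scores can differ noticeably between the two settings.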

![meteor_error](https://user-images.githubusercontent.com/59523992/176444676-6dec19ed-d56f-455a-b531-669b26e937e9.PNG)

**Error:** ModuleNotFoundError: No module named 'sentence_transformers' In call to configurable 'SupertMetric' () **Version:** 0.891 **Description** Tried the new summ_eval package but still getting this issue in the reference-free evaluation metric....

Error message: Building wheel for pyemd (setup.py) ... error ERROR: Command errored out with exit status 1: command: 'C:\Users\a-sbrankovic\Anaconda3\envs\env-01\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\a-sbrankovic\\AppData\\Local\\Temp\\pip-install-0hfs4j_s\\pyemd_8ad09ff0ac7849d8b344211e55ed7432\\setup.py'"'"';...

I installed the summ-eval package through pip on a Linux machine and tried running the same command shown in the README: calc-scores --config-file=examples/basic.config --metrics "rouge" --summ-file generated_predictions_sorted.txt --ref-file targetAnswers.txt --output-file...

Hi Team, I installed the summ_eval package using **pip** and also tried to run it locally by **cloning** it. I encountered several issues and here are the fixes that helped...

Hi to all, I'm trying to install the package from pip3, but I'm getting the following error: ``` $ pip3 install summ-eval ``` ``` DEPRECATION: Configuring installation scheme with distutils...

The easiest way to reproduce is: ``` from summ_eval.supert_metric import SupertMetric supert = SupertMetric() supert.evaluate_example("the", "this is a document") ``` The problem happens when `kill_stopwords` removes the stopword `the`, leaving no...
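One defensive workaround, until the metric handles this edge case itself, is to check that a summary still contains content tokens before passing it to the metric. A minimal sketch with an illustrative (hypothetical) stopword list, not the exact list SUPERT uses:

```python
# Illustrative subset of English stopwords; SUPERT's internal
# stopword list may differ, so treat this as an assumption.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is"}

def has_content_tokens(text):
    """Return True if at least one token survives stopword removal.

    Summaries that fail this check would be reduced to an empty
    token sequence by kill_stopwords, triggering the crash above,
    so callers can skip or specially handle them.
    """
    return any(tok.lower() not in STOPWORDS for tok in text.split())
```

For example, `has_content_tokens("the")` is False while `has_content_tokens("this is a document")` is True, so only the second input would be forwarded to `SupertMetric.evaluate_example`.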

Dear Author, together with @GuillaumeStaermanML we would like to add results for new metrics, specifically: BaryScore https://arxiv.org/abs/2108.12463, DepthScore https://arxiv.org/abs/2103.12711, and InfoLM (currently unpublished). Is it something you could be interested in...

I encountered an error calculating the mover_score metric; the rouge, bert-score, and chrf metrics all work fine. > Calculating scores for the mover_score metric. /opt/conda/conda-bld/pytorch_1595629403081/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [118,0,0], thread: [64,0,0] Assertion `srcIndex...